Wednesday, April 22, 2026
| Summary | ⛅️ Mostly clear until night. |
|---|---|
| Temperature Range | 11°C to 20°C (52°F to 68°F) |
| Feels Like | Low: 49°F, High: 71°F |
| Humidity | 77% |
| Wind | 12 km/h (7 mph), Direction: 245° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:08 AM / 🌇 07:25 PM |
| Moon Phase | Waxing Crescent (19%) |
| Cloud Cover | 17% |
| Pressure | 1015.24 hPa |
| Dew Point | 53.44°F |
| Visibility | 6.02 miles |
Tired of pulling in all of syn just to do:

```rust
let input = parse_macro_input!(input as LitStr);
let value = input.value();
```
litext does exactly that, nothing more, with zero dependencies. Add it to your proc-macro crate and use the `litext!` macro:
```rust
pub fn my_macro(input: TokenStream) -> TokenStream {
    let value: String = litext!(input);
    // value is the fully unescaped string, just like syn gives you
    // ...
}
```
Plain strings, raw strings (r#"..."#), all escape sequences including \x1b, \u{...}, line continuations, with proper compile_error! output on bad input.
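For intuition, here is roughly what the escape handling amounts to, as a simplified sketch (not litext's actual implementation; it covers only the common sequences, not raw strings or line continuations):

```rust
// Simplified string-literal unescaper (illustrative sketch, not litext's
// real code): resolves \n, \t, \r, \\, \", \0, \xNN and \u{...}.
fn unescape(s: &str) -> Option<String> {
    let mut out = String::new();
    let mut chars = s.chars();
    while let Some(c) = chars.next() {
        if c != '\\' {
            out.push(c);
            continue;
        }
        match chars.next()? {
            'n' => out.push('\n'),
            't' => out.push('\t'),
            'r' => out.push('\r'),
            '\\' => out.push('\\'),
            '"' => out.push('"'),
            '0' => out.push('\0'),
            'x' => {
                // \xNN: exactly two hex digits
                let hi = chars.next()?;
                let lo = chars.next()?;
                let byte = u8::from_str_radix(&format!("{hi}{lo}"), 16).ok()?;
                out.push(byte as char);
            }
            'u' => {
                // \u{...}: hex digits wrapped in braces
                if chars.next()? != '{' {
                    return None;
                }
                let hex: String = chars.by_ref().take_while(|&c| c != '}').collect();
                let code = u32::from_str_radix(&hex, 16).ok()?;
                out.push(char::from_u32(code)?);
            }
            _ => return None, // unknown escape
        }
    }
    Some(out)
}

fn main() {
    assert_eq!(unescape(r"a\nb\x1b").unwrap(), "a\nb\u{1b}");
    assert_eq!(unescape(r"crab: \u{1F980}").unwrap(), "crab: 🦀");
    assert!(unescape(r"\q").is_none());
}
```

The real crate additionally has to strip the surrounding quotes from the token's string representation and emit `compile_error!` instead of returning `None`.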
The trick that makes it work without proc-macro = true is just extern crate proc_macro in a normal lib crate. Simple once you know it, but took some digging.
This started as a learning project to understand how proc-macros work at the token level without hiding everything behind syn, but it turned into something useful, so I figured I'd publish it.
Source on Codeberg: https://codeberg.org/razkar/litext
Crates.io (0.2): https://crates.io/crates/litext
Built a LeetCode TUI called leetrs so you can solve and submit problems directly from the terminal
I built a terminal-based UI for LeetCode that lets you browse problems, code solutions, run tests, and submit — all without leaving your terminal.
I made it because I wanted a faster, more focused workflow without constantly switching between the browser, editor, and terminal. If you like working in the CLI, this might be useful.
What it does
Browse LeetCode problems from the terminal
View problem details and metadata
Write solutions in your preferred terminal workflow
Run tests
Submit solutions directly from the TUI
Stay fully inside the terminal while solving
Looking for feedback + contributors
The project is open source, and I’d love:
feedback on the UX and workflow
bug reports
feature suggestions
contributions from anyone interested in TUIs, developer tools, or LeetCode integrations
If you want to contribute, check out the repo here: https://github.com/shadowmkj/leetrs.git
If people are interested, I can also share more details about the architecture, implementation choices, and roadmap.
I've been working on a low-level library that includes a bunch of compile time assertion functions that can do a wide variety of things.
One of the functionalities of the checks module is a check that can determine whether or not a type has a niche, which is to say, whether Option<T> has the same size as T. Simple enough. But I also wanted an assertion for a minimum number of niches. While working on that problem, I stumbled across the solution to a problem I wasn't even looking for: I discovered a way to check whether a type is impossible to construct in safe Rust.
Here is an example of a type that is impossible to construct:
```rust
pub enum ImpossibleZst {}
```
This enum has no variants, which means that there's no way to create an instance of it. That also makes it a zero-sized type, and there's no way to apply a repr to these enums that will change the size, so they will always be zero-sized.
I should probably take some time to explain niches in Rust to those that might not have heard of them. In Rust, niches are bit representations that are considered impossible to construct for a type. For example, the NonNull type as well as the NonZero types have a single niche, which makes it possible for Option<NonNull<T>> to have the same size and alignment as NonNull<T>.
This is a very useful memory-saving optimization that the Rust compiler provides.
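The effect is easy to observe directly with `size_of` (a quick demonstration, not part of the library being described):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // NonZeroU32 reserves the all-zero bit pattern as a niche,
    // so Option<NonZeroU32> encodes None in that niche for free.
    assert_eq!(size_of::<NonZeroU32>(), 4);
    assert_eq!(size_of::<Option<NonZeroU32>>(), 4);

    // Plain u32 has no niche, so Option<u32> needs a separate tag,
    // padded to alignment, hence 8 bytes.
    assert_eq!(size_of::<Option<u32>>(), 8);
}
```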
Now that I've gotten that out of the way, I'll explain the trick that makes it possible to determine if a type is impossible to construct.
First, you'll need some helper types.
A type that occupies 255 niches, and has one typed variant. We'll call this Niche255.
```rust
pub enum Niche255<T> {
    T(T),
    Nx01,
    Nx02,
    Nx03,
    // ...
    Nxff,
}
```

I'll spare you the wall of variants. But trust me, this enum has 256 variants total.
You can't apply repr(C, packed) to this enum, but we'll need a packed type or else this won't work.
```rust
#[repr(C, packed)]
pub struct PackedNiche255<T>(Niche255<T>);
```
Finally, we'll need another packed type, but this one will have a "header" byte.
```rust
#[repr(C, packed)]
pub struct PackedWithByte<T>(u8, T);
```
Now that we have these helper types, we can check whether or not a type is impossible to construct:
```rust
pub const fn is_impossible_to_construct<T>() -> bool {
    size_of::<PackedNiche255<Option<T>>>() == size_of::<PackedWithByte<T>>()
}
```
And that's all there is to it.
Now for the explanation.
Option<T> needs two discriminants: one for the Some variant, and one for the None variant. For some types, Option<T> (and other similar enums) can store the discriminant of the None variant as one of the impossible bit representations of the T type. As I said before, this makes it possible for Option<T> to have the same size as T when T has impossible representations.
With Niche255, 255 discriminant slots are occupied already. The only slot remaining is for the T variant. If T is a ZST (zero-sized type), then Niche255<T> will have a size of 1.
When you then store an Option<T> inside Niche255, it needs at least one discriminant for the None variant. If the Some variant is impossible to construct, then it does not need a discriminant for Some, which means there will be 256 total occupied discriminant slots. If T were possible to construct, then Option<T> would need two discriminants, and as a result Niche255<Option<T>> would occupy 257 discriminant slots, meaning the discriminant must occupy two bytes. If T were impossible to construct, the discriminant would only be a single byte, since only 256 slots would be occupied. This means that the size of PackedNiche255<Option<T>> is 1 byte greater than the size of T if T is impossible to construct, but 2 bytes greater if it is possible to construct.
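You can see the underlying collapse directly with std alone (a sketch, not the library's API): Option of an uninhabited type needs no tag at all.

```rust
use std::convert::Infallible;
use std::mem::size_of;

// An uninhabited (impossible-to-construct) zero-sized enum.
enum ImpossibleZst {}

fn main() {
    // The Some variant can never exist, so None is the only value
    // and no discriminant byte is needed at all.
    assert_eq!(size_of::<Option<ImpossibleZst>>(), 0);
    assert_eq!(size_of::<Option<Infallible>>(), 0);

    // Contrast with an inhabited ZST, where Option needs a tag byte.
    assert_eq!(size_of::<Option<()>>(), 1);
}
```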
I hope you enjoyed my write-up, I haven't done one of these in a while.
Edit: Playground Link
Also, I have been informed that the correct terminology is inhabited and uninhabited. Apologies to anyone who, unlike me, already understood the difference.
RustSploit v0.4.9 is LIVE – the Rust pentest framework just leveled up HARD!
Zero unwraps. Post-quantum crypto. 239 modules. DoS rewritten for Gbps. New Telnet/Bluetooth/RCE exploits. Bruteforce streaming 250MB+ wordlists. SSRF & command injection nuked. Universal targets. MCP + WebSocket full parity. 100% metadata. Security hardened to the core.
Full release + one-liner install:
https://github.com/s-b-repo/rustsploit/releases/tag/v0.4.9
#RustSploit
#RustPentest #CyberSecurity #InfoSec #PenTest #EthicalHacking #RedTeam #RustLang #PostQuantum #CVE #Hacking #CyberTools #DoS #BruteForce #PentestFramework #RustPoweredSploit #ZeroDayHunter #PenTestInRust

Everything mentioned in the v0.4.9 release notes (full exhaustive list):
Core Stats & Build
v0.4.9 (latest)
0 errors, 0 warnings
Dependency sweep across 9 crates (tokio 1.51, rand 0.10, clap 4.6, btleplug 0.12, + more)
hex.rs now fully vendored (no external hex crate)
All 71 module banners are batch-safe
100% credential module metadata coverage
Dropped aws-lc-sys (317s build) → ring (~15s build)
Minimum Rust 1.85 (edition 2024)
Docs fully overhauled:
11 files corrected, phantom API endpoints removed, module catalog updated to 239 entries
Install one-liner added
183 exploits | 27 scanners | 29 creds
New Modules & Features
New Telnet IAC flood module + full parser hardening (WILL/DO storm, subnegotiation bombs, NOP interleave, combined mode;
MAX_IAC_ROUNDS=64, MAX_DRAIN_BYTES=64KB, 4096-byte SB cap, batched responses)
Bluetooth Fast Pair exploitation module (WPair):
Device DB expanded 18→51 targets (Pixel Buds, Sony WH/WF, JBL, Bose QC35 II, Razer Hammerhead, B&O, etc.); new ActionRequest strategy (6th KBP mode); TUI → rustyline REPL; full API + check() support
9 new credential modules:
MySQL, PostgreSQL, VNC, IMAP, Redis, HTTP Basic, Elasticsearch, CouchDB, Memcached (native wire protocols, Arc-shared reqwest, RFC 3501 IMAP quoting, packet limits 1MB→64KB)
8 new scanner modules:
ssl_scanner, redis_scanner, vnc_scanner, snmp_scanner (BER/ASN.1), waf_detector (10 signatures), subdomain_scanner (wildcard detection), nbns_scanner → scanners now 15→23
20 new exploit modules (total 148 in this section, overall exploits 183):
erlang_otp_ssh_rce (CVE-2025-32433, CVSS 10.0), hpe_oneview_rce (CVE-2025-37164, CVSS 10.0), sonicwall_sma_rce (CVE-2025-40602, CVSS 9.8), tomcat_put_rce (CVE-2025-24813, CVSS 9.8), citrixbleed2 (CVE-2025-5777); all with info(), check(), full API
Universal target support: single IPs, CIDR, comma lists, file lists, random internet scan (no per-module changes)
Unlimited subnet scanning: hard 1M-host caps removed; configurable concurrency/max random/timeout via setg; lazy /0 iteration, adaptive progress %
Security & Hardening
6 GlobalOptions findings fixed (silent failures, key/value validation, lock contention)
Semaphore drain refactored to single syscall
10+ error-silence sites eliminated (loot, port scanner, SSH, jobs, MCP, workspace)
SSH/FTP/RDP combo overflow protection added
SSH/FTP/spool fixes:
command injection in sshpwn_session.rs (cd path) fixed with shell-escape; credential files → 0o600; FTP connection leak on FTPS fallback fixed; O_NOFOLLOW on spool;
DNS panic in ssh_bruteforce eliminated
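For context, the usual fix for this class of `cd <path>` command injection is POSIX single-quote shell escaping. A minimal sketch of the idea (hypothetical helper, not RustSploit's actual sshpwn_session.rs code):

```rust
// Hypothetical sketch of POSIX single-quote shell escaping, the
// standard fix for command injection in interpolated paths
// (illustrative only, not RustSploit's actual implementation).
fn shell_escape(arg: &str) -> String {
    // Wrap in single quotes; an embedded ' becomes '\''
    // (close quote, escaped literal quote, reopen quote).
    let mut out = String::with_capacity(arg.len() + 2);
    out.push('\'');
    for c in arg.chars() {
        if c == '\'' {
            out.push_str("'\\''");
        } else {
            out.push(c);
        }
    }
    out.push('\'');
    out
}

fn main() {
    assert_eq!(shell_escape("/tmp/safe"), "'/tmp/safe'");
    // Injection attempt is neutralized into a literal string.
    assert_eq!(shell_escape("x'; rm -rf /"), "'x'\\''; rm -rf /'");
}
```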
SSRF hardening:
is_blocked_ip now full RFC private space (incl. IPv6 ::1, fe80::/10, fc00::/7); URL parsing bypass fixed (url::Url::parse handles @ and percent-encoded); file:// blocked; DNS fail-closed
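A blocklist along the lines described might look like this sketch using only std::net (illustrative; not the actual is_blocked_ip, and production code would cover additional reserved ranges):

```rust
use std::net::IpAddr;

// Illustrative SSRF IP blocklist covering RFC 1918 private space,
// loopback, link-local, and unspecified addresses for v4 and v6
// (a sketch, not RustSploit's actual is_blocked_ip).
fn is_blocked_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_private() || v4.is_loopback() || v4.is_link_local() || v4.is_unspecified()
        }
        IpAddr::V6(v6) => {
            v6.is_loopback()
                || v6.is_unspecified()
                // fe80::/10 link-local
                || (v6.segments()[0] & 0xffc0) == 0xfe80
                // fc00::/7 unique local
                || (v6.segments()[0] & 0xfe00) == 0xfc00
        }
    }
}

fn main() {
    assert!(is_blocked_ip("10.0.0.1".parse().unwrap()));
    assert!(is_blocked_ip("::1".parse().unwrap()));
    assert!(is_blocked_ip("fe80::1".parse().unwrap()));
    assert!(!is_blocked_ip("93.184.216.34".parse().unwrap()));
}
```

The "fail closed" note in the release is the other half of the fix: resolve the hostname first and block when DNS fails, rather than letting an unresolvable name through.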
Security hardening round 1:
path traversal (spool.rs, loot.rs), CSV injection (export.rs), command injection (2 exploits), silent JSON corruption (3 storage modules), lock-held-during-I/O (workspace.rs) → 17 fixes
Security hardening round 2:
6 bruteforce modulo-by-zero (empty wordlists), ssh_user_enum blocking async runtime (fixed via spawn_blocking), 5 scanner bugs (IPMI integer overflow, char-as-u8 truncation)
Zero .unwrap():
15 Rust panicking paths eliminated across 7 files; 0 .unwrap(), 0 TODO/FIXME, 0 panic!/unreachable!/unimplemented!, 0 #[allow(dead_code)]; regex moved to once_cell::Lazy
DoS Overhaul & Performance
Full DoS module overhaul:
8 raw-socket modules (SYN flood, DNS/NTP/SSDP/Memcached amp, ICMP/UDP flood) rewritten with shared socket pools, pre-flight tests, proper error handling; 128KB thread stacks (64x less memory); null_syn_exhaustion now uses sendmmsg (32 packets/syscall)
DoS performance:
SYN flood (4MB buffer, 1400B MTU payloads, 32-packet batches, Gbps stats); TCP flood (concurrency 500→2000, RST teardown, multi-port RR); HTTP/2 Rapid Reset matches CVE-2023-44487 (50 HEADERS + RST burst)
Bruteforce Engine
Bruteforce engine refactor:
creds/utils.rs → src/utils/bruteforce.rs (~203 files updated); wordlists >250MB now stream 500K-line batches (no OOM); 10 duplicate EXCLUDED_RANGES removed; global lockout pause + backoff; FTP/Redis/HTTP false-positive fixes
15 critical/high bruteforce fixes:
CPU busy-spin drain loop, IPv6 /64 OOM (capped 1M hosts), Telnet IAC chunk corruption, concurrent file writes (mpsc single writer), MQTT CONNACK cap, L2TP avp_len==0 loop, channel deadlock
Major Telnet bruteforce overhaul:
verify_shell ("echo RS_VERIFIED"), new AuthSignal enum (Success/Failure/Reprompt/Lockout/Ambiguous), lockout detection (30s pause + cooperative wait), 15-device fingerprinting + cred prioritization, 55 default creds, multi-port, streaming wordlists
Scanning & Auditing
Scanner audit fixes:
/0 CIDR OOM (ping_sweep), unbounded HTTP body → 256KB streaming, Port 0 validation, wildcard DNS (3 random subdomains), TOCTOU file perms (cred_store/loot/global_options/pq_channel), saturating_mul capacity calcs
181-module audit:
all std::thread::sleep → tokio in SSH modules; 39 empty CVE refs populated; new require_root() for 9 raw-socket modules; 47 sites to unified build_http_client_with(); ~14 TcpStream::connect wrapped in tcp_connect_addr(); build.rs regex caching
Networking & API
Networking fixes:
6 unwraps removed; udp_bind IPv6-aware; RDP BER lengths >65535 (3/4-byte); 10 TcpStream::connect with timeout; SSH banner buffer 1024B; TCP_NODELAY on SMTP/POP3
MCP server + WebSocket API parity:
full MCP with stdout isolation/1MiB line cap/SSRF/binary-safe; WebSocket gains 15 commands (info/check/setg/creds/hosts/workspace/loot/export/jobs) → 27 endpoints total with shell parity; background jobs get per-run config
Concurrent API (breaking):
gag → task-local OutputBuffer; 4683 println! → mprintln!; std → tokio RwLock; semaphore 1 → num_cpus; structured Finding/Severity types
Async prompt system:
132+ modules no longer block tokio on stdin; 7 prompt + 8 cfg_prompt_* → async via spawn_blocking; mass scan no longer stalls
Output & Batch
Clean batch/mass-scan output:
163 banner functions skip in batch; prompt cache (concurrent tasks share one answer); mprintln_block! macro; per-task timeout from setg; BatchGuard race fixed with AtomicUsize
100% module metadata:
all 152 modules now have info(); 13 cred modules persist to ~/.rustsploit/creds.json (0o600 locked); 5 modules API compatibility fixed; MQTT real TLS; SSH Spray stop-on-success; POP3 exponential backoff
Post-Quantum & Breaking Changes
Post-quantum encryption (breaking):
TLS + Bearer tokens removed; replaced with ML-KEM-768 + X25519 hybrid (SSH-style identity keys, mutual auth, Double Ratchet ChaCha20-Poly1305 forward secrecy, 3 combined shared secrets); auto host key on first run; no TLS, no API keys, no #[allow] in PQ code
Framework Intro
RustSploit core:
persistent global options, credential store, workspace/host tracking, loot management, JSON/CSV export, background jobs, spool logging, resource scripts, compile-time auto-discover modules, Shell + REST API from day one
Hello everyone,
Today I'm releasing v0.8.0 of wrkflw with a bunch of features I've been working on for a while.
wrkflw is a CLI that validates and runs GitHub Actions workflows locally, so you can iterate on CI without pushing a dozen "fix ci" commits to see if something works. It runs jobs in Docker, Podman, or directly on your machine via runtime emulation. There's a TUI for picking workflows and watching live logs, a validator that catches schema issues and bad expressions before you push, and it also works on GitLab CI files.
These are the new features:
${{ ... }} expression evaluator
GitHub Actions expressions are now evaluated the same way GitHub evaluates them. That means matrix.os, secrets.TOKEN, needs.build.outputs.version, steps.foo.outputs.bar, workflow-level env, and nested toJSON(needs) all work as you'd expect — returning proper nested objects instead of stringified blobs. If you've ever had a local runner silently pass a broken expression and execute a step it shouldn't have, this is the thing that fixes it.
Composite actions now work end-to-end, step outputs propagate back to the caller, env expressions resolve inside composite steps, and required inputs are cross-checked against the call site during validation. This was the biggest gap between "runs my workflow" and "actually emulates GitHub Actions."
New wrkflw watch subcommand that watches your repo and auto-reruns only the workflows whose paths: filter matches the changed files.
wrkflw run also gained --event, --diff, --changed-files, and --base-branch for simulating specific trigger contexts locally, including pull requests against a target branch. Strict-filter mode is on by default and refuses to run with --event but no change set, because that used to silently skip every paths:-gated workflow and the failure mode was invisible.
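As a concrete illustration, a workflow gated with a paths: filter like the one below (hypothetical file, not from the wrkflw docs) is exactly the kind that strict-filter mode protects: with --event but no change set it is refused rather than silently skipped, and wrkflw watch re-runs it only when a matching file changes.

```yaml
# Hypothetical trigger block: this workflow only fires when Rust
# sources or the manifest change, which is what the path matching
# in `wrkflw watch` and strict-filter mode key off.
on:
  push:
    paths:
      - "src/**"
      - "Cargo.toml"
```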
Under --runtime emulation, wrkflw now handles upload-artifact / download-artifact, the GitHub cache protocol, and inter-job outputs between dependent jobs. These were the pieces that used to make emulation feel like "jobs run in isolation and nothing is shared" — they now share things correctly.
The TUI got a full redesign on top of ratatui. New design system, properly laid out screens, job selection mode so you can pick individual jobs to run from the list, a tweaks overlay for toggles, and cleaner keybinding discoverability. It's also behind a cargo feature flag now, so you can install the plain CLI if you don't want the TUI.
Other improvements:
- bash --noprofile --norc -e -o pipefail, so failures no longer get silently skipped
- --runtime emulation
- --job flag to run a specific job, --jobs to list them
- env: values

Breaking changes
A few worth checking before you upgrade, mostly around the strict-filter default, the new shell semantics, and the secret store format change. All documented with migration steps in BREAKING_CHANGES.md.
Installation:
cargo install wrkflw
Runs via Docker, Podman, or --runtime emulation for a native process path.
Would love to hear your feedback, especially edge cases where a workflow behaves differently under wrkflw than on actual GitHub Actions. Those are the bugs I want to hunt.
Demo: https://github.com/bahdotsh/wrkflw/blob/main/demo.gif
Project: https://github.com/bahdotsh/wrkflw
Hey y’all, Truce (https://github.com/truce-audio/truce) is an audio plugin framework for Rust that can compile to any plugin format from a single codebase. JUCE exists, but I never liked that JUCE is owned by PACE, the iLok company, and JUCE just felt very 2005. It’s also incredibly bloated after 20 years. With Truce, you can get your own plugin up and running in a matter of minutes, with nothing you don’t need (do you really need that JavaScript interpreter in your audio plugin?)
If there are any other fellow audio/music heads out there, I’d love to get some feedback!
Here’s a free analyzer plugin I built with truce, aimed at debugging/reverse engineering plugins: https://github.com/truce-audio/truce-analyzer
Hi r/rust,
I wanted to share Oxanus, a Redis-backed job processing library for Rust that we've been working on for almost a year now.
It has been powering the background job infrastructure behind Player.gg and Firstlook.gg, serving hundreds of studios and millions of players.
The project is opinionated in a pretty simple way - it focuses on one backend and tries to do that well instead of abstracting over multiple backends.
Some of the things it supports today:
Repository: https://github.com/pragmaplatform/oxanus
Any feedback is appreciated!
i've been looking for a project to actually learn rust beyond the book examples and tutorials. wanted something practical that i'd use daily. ended up building a cli tool that pulls youtube video transcripts, stores them locally, and lets you full-text search across all of them.
the use case is simple. i watch a lot of technical talks and interviews on youtube and i can never find things again. youtube search only matches titles. i wanted grep but for things people said in videos.
for pulling transcripts i use transcript api. setup was:
npx skills add ZeroPointRepo/youtube-skills --skill youtube-full

the cli takes a youtube url, pulls the transcript, and writes it to a local sqlite database using rusqlite. i use sqlite's fts5 extension for the search index. the workflow is basically:
yt-grep add https://youtube.com/watch?v=whatever
yt-grep search "borrow checker"
and it returns matching snippets with the video title, timestamp, and surrounding context. like grep output but for youtube.
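the fts5 setup described above boils down to something like this (table and column names are illustrative, not yt-grep's actual schema):

```sql
-- hypothetical schema: one fts5 virtual table over transcript text,
-- queried with MATCH plus snippet() for the surrounding context
CREATE VIRTUAL TABLE transcripts USING fts5(video_title, ts, body);

SELECT video_title, ts,
       snippet(transcripts, 2, '[', ']', '...', 10) AS context
FROM transcripts
WHERE transcripts MATCH 'borrow checker'
ORDER BY rank;
```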
the performance is what made me fall in love with rust for this. searching across 900 transcripts takes 2ms. not seconds. milliseconds. the binary is 4mb. it starts instantly. no runtime, no jvm warmup. just a binary that does one thing fast.
i also added a yt-grep related command that uses tf-idf to find transcripts that are similar to a given one. wanted to play with the tantivy crate but realized for my scale sqlite fts5 with a bit of custom ranking was more than enough. tantivy would be cool for 100k+ documents but at 900 it's overkill.
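for anyone curious, the tf-idf part boils down to something like this (a from-scratch sketch with naive whitespace tokenization, not the actual yt-grep code):

```rust
use std::collections::{HashMap, HashSet};

// Toy TF-IDF: score each document's terms by term frequency times
// inverse document frequency (a sketch, not yt-grep's actual ranking).
fn tf_idf(docs: &[&str]) -> Vec<HashMap<String, f64>> {
    let n = docs.len() as f64;

    // Document frequency: how many docs contain each term.
    let mut df: HashMap<&str, f64> = HashMap::new();
    for doc in docs {
        let unique: HashSet<&str> = doc.split_whitespace().collect();
        for term in unique {
            *df.entry(term).or_insert(0.0) += 1.0;
        }
    }

    docs.iter()
        .map(|doc| {
            let words: Vec<&str> = doc.split_whitespace().collect();
            let mut tf: HashMap<String, f64> = HashMap::new();
            for w in &words {
                *tf.entry(w.to_string()).or_insert(0.0) += 1.0;
            }
            let len = words.len() as f64;
            tf.into_iter()
                .map(|(t, c)| {
                    let idf = (n / df[t.as_str()]).ln();
                    (t, (c / len) * idf)
                })
                .collect()
        })
        .collect()
}

fn main() {
    let scores = tf_idf(&["rust borrow checker", "rust async", "borrow checker"]);
    // "async" appears in only one doc while "rust" appears in two,
    // so within doc 1 the rarer term scores higher.
    assert!(scores[1]["async"] > scores[1]["rust"]);
}
```

to compare two transcripts you'd then take the cosine similarity of their score maps.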
the whole project is about 1500 lines of rust. the borrow checker only made me want to quit twice which i'm counting as a win. clap for arg parsing, reqwest for http calls. rusqlite handles all the storage. compiles in about 8 seconds on my m2.
First, let me clarify: I'm not a programmer, engineer, or anything like that. I'm just starting to learn, so this discussion is more about my ignorance, and I'd like to hear the opinions of actual programmers and engineers.
Another clarification: I'm Spanish, so please excuse me if I make any mistakes in English while writing this.
Does the software world hang by a thread? Legacy software written before best practices were defined? Languages that didn't even support characters from other languages (e.g., the letter ñ in Spanish)? Languages that are insecure and unstable due to memory errors (I know these are also programmer mistakes)?
I've been thinking: if everything were rebuilt from scratch, in languages like Rust (or perhaps Rust isn't even necessary, though it would still be better), or simply in modern versions of each language with best practices applied, could it be a better world?
Is this true, or am I wrong?
I've heard of some technologies that weren't even made with this in mind; they were simply made for another purpose, but due to popularity and massive growth, they got out of the creators' hands (like Javascript).
Hi all,
I’ve been working on celerity, a pure Rust implementation of ZMTP 3.1 for anyone who wants ZeroMQ-style messaging semantics without depending on libzmq.
The main design choice is that the protocol engine is sans-IO. CelerityPeer owns greeting, handshake, framing, multipart assembly, and security state, while the optional Tokio layer handles TCP and Unix domain sockets. That split kept the wire protocol testable in isolation and kept transport issues separate from protocol state.
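The sans-IO split means the core never touches sockets: callers feed it bytes and poll it for events. A toy sketch of the pattern (illustrative of the design, not celerity's actual API):

```rust
// Toy sans-IO frame parser: length-prefixed frames, no sockets.
// The caller owns all I/O and just feeds bytes in; this is what
// makes the protocol state machine testable with plain byte vectors.
struct FrameMachine {
    buf: Vec<u8>,
}

impl FrameMachine {
    fn new() -> Self {
        Self { buf: Vec::new() }
    }

    /// Feed raw bytes from any transport (TCP, UDS, a test vector...).
    fn feed(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
    }

    /// Pop the next complete frame, if one has been assembled.
    fn next_frame(&mut self) -> Option<Vec<u8>> {
        let len = *self.buf.first()? as usize;
        if self.buf.len() < 1 + len {
            return None; // need more bytes
        }
        let frame = self.buf[1..1 + len].to_vec();
        self.buf.drain(..1 + len);
        Some(frame)
    }
}

fn main() {
    let mut m = FrameMachine::new();
    m.feed(&[3, b'a', b'b']); // partial frame: no output yet
    assert_eq!(m.next_frame(), None);
    m.feed(&[b'c', 2, b'x', b'y']); // completes first frame, adds a second
    assert_eq!(m.next_frame(), Some(b"abc".to_vec()));
    assert_eq!(m.next_frame(), Some(b"xy".to_vec()));
}
```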
Current scope:
- PUB/SUB and REQ/REP
- CURVE-RS for authenticated/encrypted non-local links
- cel-cat CLI for local smoke testing

A few implementation details:
- NULL for local workflows, while non-local links fail closed unless CURVE is configured or insecure NULL is explicitly enabled
- CURVE-RS path uses X25519 key agreement, HKDF-SHA256 for derivation, and ChaCha20-Poly1305 for traffic protection
- DropNewest behavior

The API currently has two layers:
- CelerityPeer as a protocol state machine
- celerity::io with Tokio adapters and helpers like PubSocket, SubSocket, ReqSocket, and RepSocket

I’ve been validating it with cargo test --all-features; the current test matrix covers handshake edge cases, CURVE flows, IPC authorization checks, queue policy behavior, and end-to-end Tokio socket roundtrips.
I’d be interested in feedback on:
I'm open to any other suggestions
So I'm trying something new (for me) and I wanted more opinions on this: is Rust the most optimal way to write for a TEE? I've been seeing a lot of discussion around Rust, and Microsoft moving away from C/C++, so I'd appreciate advice from someone with hands-on experience here.
ESLint, Taplo, and Tombi are what's listed on the TOML git wiki, but:
- eslint: I don't want to touch the JavaScript ecosystem
- taplo: worked really well, but is now in maintenance mode, with no 1.1.0 spec support
- tombi: fixated on sorting support first; has shot down multiple suggestions in the name of "interfering with auto sorting". At one point it didn't even want to allow blank lines in TOML files, and only very recently did the dev allow you to disable sorting per schema.
As much as I don't like it, Tombi seems to be the only actively maintained TOML formatter. Is there an alternative?
I’ve been building Proxelar, a Rust CLI project, and over the last month it got a lot more attention than I expected. It also recently hit 750 stars on GitHub, which I’m really grateful for.
I also just got it published on Homebrew, which is a big milestone for me.
I’m not posting to promote it so much as to ask for advice from people here who’ve turned OSS projects into tools people actually use.
Right now I’m trying to think less about “what feature should I add next?” and more about bigger questions like:
I’d also love to bring in contributors over time, but I want to do that in a way that feels genuine and sustainable.
If you’ve been at this stage with an OSS project before, I’d really appreciate any advice on what matters most next.
GitHub: https://github.com/emanuele-em/proxelar
Homebrew: brew install proxelar
Hi everyone,
I took a long break from programming due to some psychiatric challenges that made it difficult to focus, but I'm happy to share that I've recently found my way back.
I decided to return by learning something entirely new: Rust. I picked up the Rust book and worked through it, though the process was slow at times. I was still in the middle of psychiatric treatment during this period, so my concentration wasn't always where I wanted it to be. Despite that, I managed to finish the book, which felt like a real milestone.
Shortly after, I went through a difficult depressive episode lasting a week or two, but I received treatment and I'm doing better now.
For the past day or two, I've started working on a small project where I'm exploring the `clap` and `image` crates. Progress feels a little slow, but I think that's probably to be expected: it's been a long time since I've coded actively, I'm approaching programming with a fresh perspective, and I'm simultaneously getting familiar with two new libraries.
Is this pace normal? I believe it is, but I'd love to hear from others who've been in a similar situation.
Sharing this for others who are looking for protovalidate support in Rust using the Buf/ConnectRPC ecosystem.
For those new to ConnectRPC and Buf:
Anthropic made a ConnectRPC/gRPC library for Rust which uses views instead of native types the way Tonic does. IMO, this is superior (as the benchmarks indicate), and also closer to true gRPC.
Protovalidate is used to validate incoming requests (it can also handle arbitrary messages), so you don't have to program this yourself. It's flexible through CEL.
[EDIT: v0.1.0 is up]
I've been working on a schema language for full stack development for around nine months now, built with Rust.
I've always thought full stack development was overly complex. A system design on a whiteboard does not cleanly translate to code. Even something simple like a web page with a persistent counter can be thousands of lines of code/config across infrastructure, backend, frontend, and database schemas.
Tools exist to connect specific parts. OpenAPI and RPC-langs allow you to define a schema to make stubs for the backend and client. ORMs exist to allow your code to define database schemas. Infrastructure-as-Code reduces the amount of time you spend on a shitty cloud provider UI.
Trying to set up these tools puts you in environment hell, and really still writing just as much code to connect X to Y. Additionally, the toolset changes drastically depending on what language you use.
My language Cloesce is (as far as I can tell) the first language to replace the whole stack:
Additionally, Cloesce's ORM extends to cloud-native storage, meaning you can relate SQL tables to things like an Object Storage bucket.
Cloesce is implemented as a four-stage compiler, and has a runtime state machine mostly implemented in WebAssembly. It runs on Cloudflare Workers and currently compiles to TypeScript, but will someday include any client-side language and the full Cloudflare-supported subset of backend languages.
I've spent my entire senior year of college thoroughly thinking through the theory, developer experience, and implementation. That is to say, it is not vibe coded.
A Cloesce Model which defines the database, backend, frontend, and infrastructure.
I'm hoping to garner some discussion. Could you ever see yourself using a tool like this? How can I improve my repo to make it more friendly to contributions?
https://github.com/bens-schreiber/cloesce
https://cloesce.pages.dev/
`WTX` now has support for `X.509` certificates and you can choose the underlying crypto project that verifies signatures.
https://github.com/c410-f3r/wtx
The following image was taken from a script that runs the `x509-limbo` testsuite.
Signature verification is just one of several checks that happen in chain validation, but it is still possible to see the runtime weight of each backend.
The script is in https://github.com/c410-f3r/wtx/blob/main/.scripts/x509-limbo.sh if you want to run it yourself.
Squashing "Edge-Case" Bugs
UI/UX Refinement (Making it not look like a prototype)
Sound Design & Ambient Audio
Marketing & Getting actual eyes on the game
I recently was reflecting on these questions. For me it seems that the biggest problem is the last one. Marketing always seems to be the most difficult, especially when you don’t have any budget for marketing - money or time.
How about you, what is your biggest time-sink?
Basically they banned the game from the Play Store because they considered it too dark.
I'm currently in the ideation stage of making a 6-10hr game, and I already know I want to incorporate music that will resonate with the emotional tones of the game's narrative and action (with one song even being incorporated as part of the gameplay). I don't have much skill in terms of composing, and think I would prefer going the route of hiring a composer instead of taking additional time learning those skills or relying on free assets.
That being said, I've no idea what the ballpark would be in terms of a fair cost for a custom soundtrack, what working with a composer is like, nor do I know if there's a recommended stage in the production pipeline to start creating compositions.
I know I'm far from being at that point, but was curious if anyone had any insights or experience into this particular area?
EDIT: Not currently looking for offers. Please do not reach to me offering services.
For my game, I added the ability to export and import the save data that you can copy paste as a text string. https://i.imgur.com/POJmgf5.png
I've added some basic security features like obfuscation and a checksum so players can't just easily edit it, but now I'm second guessing if an export/import feature is even the right thing to add.
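An obfuscation-plus-checksum export string can be as simple as this sketch (illustrative only, not the game's actual format): XOR the bytes with a key, append an additive checksum, hex-encode, and reverse the process on import.

```rust
// Toy export-string scheme (illustrative sketch): XOR-obfuscate the
// save, append a simple additive checksum, and hex-encode the result.
const KEY: u8 = 0x5A; // hypothetical obfuscation key

fn export(save: &str) -> String {
    let mut bytes: Vec<u8> = save.bytes().map(|b| b ^ KEY).collect();
    let checksum = bytes.iter().fold(0u8, |a, &b| a.wrapping_add(b));
    bytes.push(checksum);
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn import(code: &str) -> Option<String> {
    // Hex-decode, rejecting malformed input.
    let bytes: Vec<u8> = (0..code.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(code.get(i..i + 2)?, 16).ok())
        .collect::<Option<_>>()?;
    let (&checksum, data) = bytes.split_last()?;
    let expected = data.iter().fold(0u8, |a, &b| a.wrapping_add(b));
    if checksum != expected {
        return None; // tampered or corrupted
    }
    String::from_utf8(data.iter().map(|b| b ^ KEY).collect()).ok()
}

fn main() {
    let code = export(r#"{"gold":42}"#);
    assert_eq!(import(&code).as_deref(), Some(r#"{"gold":42}"#));
    assert_eq!(import("0041"), None); // bad checksum is rejected
}
```

Note this only deters casual editing; anyone who reads the client can reverse it, which matches the "fine if players cheat in singleplayer" stance.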
I'm planning to release the game on both web and Steam and the idea is that players who tried out the web demo can export their save and keep playing from there on the Steam version. This is a singleplayer game, maybe with some leaderboards, so I guess it's fine even if players "cheat" by copying the export data of other players.
But I'm also wondering if there are any other pitfalls I'm missing? Can this feature somehow blow up in my face down the line?
They don't wanna be associated with an 18+ game because they don't wanna tarnish their brand. So what they're saying is hentai is bad, but PRETTY MUCH FUCKING UP THE WORK AND INCOME OF SOMEONE WHO'S ALSO GOING THROUGH CANCER is ok? This is vile.
Hey there,
Does anyone have any reliable sources that provide detailed information on the current state of the market?
Or something about how to optimize your professional profile to be more competitive?
I am actually looking for a job as a 2D or concept artist, with decent experience (3 years in AAA production and 5 years in general). After months of searching I still can't get an interview, so maybe it's time for me to understand deeply what's going on, how to navigate the problem, and how to improve my profile.
Just a heads up, Behaviour Interactive in Montreal (Creators of Dead by Daylight) just laid off about 30 people. I know folks at the company directly. No direct evidence yet but a union organization group approached the building and handed out flyers a few weeks ago, and from the list of folks that were laid off it appears the majority of them were pro-union, as well as anti-generative AI.
| A group of unionized employees at Build a Rocket Boy (BARB), the studio behind the action game MindsEye, have initiated legal proceedings against the company. The workers allege that management installed “invasive” surveillance software on their devices without proper consent, violating UK data protection laws. |
There's a lot of advice about when to announce your game, when to launch your Steam page, and when to start marketing. Most of it is too vague to be useful. Here is the specific framework I think makes the most sense based on how the platform works.
The core insight:
Steam rewards wishlist momentum, not just wishlist totals. A game with 2,000 wishlists accumulated over 3 months of consistent activity will often outperform a game with 3,000 wishlists accumulated in one week, because the former shows Steam that the audience is genuinely interested.
The timeline I would use:
6 months before launch: Open the Steam page in Coming Soon state. Do not announce it widely yet. Run paid ads to a very small audience ($5 to $10 per day) to seed initial wishlists and let Steam index the page. This matters because Steam uses early engagement data to calibrate what the algorithm shows you.
5 months before launch: Start your community presence. Reddit posts sharing development content. Short-form clips on TikTok and Reels showing development progress or interesting mechanics. The goal here is not viral reach. It is consistent presence so you have an engaged audience when you actually need them.
3 months before launch: Full marketing push begins. Prioritize Steam Next Fest if your timing aligns. Having a playable demo on Steam Next Fest can generate more wishlists in one week than 3 months of regular marketing.
6 weeks before launch: Contact streamers. Not huge streamers. Streamers with 500 to 10,000 followers in your specific genre. Smaller streamers have audiences that actually watch and actually buy. Reach out 6 weeks out because streamers plan their content in advance.
2 weeks before launch: Your trailer goes everywhere. This is not the time to introduce your game to new audiences. This is the time to remind the audience you have already built that the launch is coming.
Launch day: Do not leave the computer. Respond to every comment, review, and question within hours. The first 72 hours of reviews and wishlist conversions determine Steam's early algorithmic treatment of your game. This window matters more than any single marketing activity.
1 week after launch: Most developers go quiet here. This is a mistake. Post a "one week update" everywhere. Share the numbers if they are positive. Share what you learned if they are not. This keeps the momentum going and generates additional coverage.
That's pretty much it. What would you add or change to this timeline based on your experiences?
Spent the last few months watching what actually happens AFTER an indie dev sends out Steam keys - which turn into coverage, which get resold, which sit forever, which get claimed by what's basically a bot farm. Patterns are more consistent than you'd think. Sharing in case someone's about to ship their first big key campaign.
What the "legit coverage" pile has in common:
They redeem within 48 hours. A key that sits unredeemed past a week stays that way about 90% of the time. A creator whose interest is still fresh is the one who'll actually play and post. Chasing the slow redeemers is almost always wasted effort.
They're verifiable across the platforms they claim. YouTube channel ID matches the claimed subs. Twitch channel streamed in the last 30 days. TikTok engagement ratios look human. If the math is off even a little, you're subsidizing a reseller.
When content drops, it ties clearly to YOUR game. Title mentions it, Steam link in description, tags accurate. That's the signal they're treating it as coverage, not filler for their upload schedule.
Their audience looks like yours. Open 3 of their recent videos - are the comments from real players in your genre, or is it "first", emoji spam, and two-word praise? Audience being real matters more than audience being big.
What the "never should have sent that key" pile has in common:
The "500k combined reach" pitch where 490k is an inactive TikTok and 10k is a YouTube with 200-view videos. Platforms aren't interchangeable. Multi-platform reach where one dead platform carries the whole number is a flag.
Email handles that match no public persona. Real creators are findable - a YouTube/Twitch handle tied to a mailable inbox. An untraceable Gmail that matches nothing is the key heading to the grey market.
Multiple "creators" redeeming from the same IP or device. If you have any tracking at all, repeat-device redemption across different claimed identities is the cleanest scam signal you'll find.
Ultra-generic praise with zero specifics. "Loved it, great game!" - no level, no mechanic, no character, not even a bug mentioned. They redeemed and bounced. A player who actually played will mention something weird or specific, even if their take is lukewarm.
Asking for a second or third key "for my team" before posting a single piece of content. Unless you've already seen real coverage from this creator, every extra key is a resale waiting to happen.
Obvious caveats: legit creators sometimes redeem slowly, sometimes email from a generic inbox, sometimes post vague praise. Single signals aren't evidence; stacks of them are. And the 2026 grey market is weirder than it was two years ago; the resellers are getting better at looking legit.
What signals have you found most reliable for filtering real reach from the grift?
TLDR: I spent 2 years making my dream game. It sold 7 copies on launch day.
A little over two years ago, I decided to finally build my dream game: Paddlenoid. At that time, I wasn't into the indie gaming scene at all. I just needed a break from my regular work. Also, I thought that if you make a good, fun game, there must be some money in it, right?
I run a small software company that builds enterprise software. Our software isn't subscription-based, so I need to sell new licenses to generate income. I decided to spend a little less time on that company to work on the game, trading some income for freedom.
Honestly, I had no plan at all, only a vague idea of what I wanted to make. I went and sat down at a local co-working space because I thought that some external 'entropy' might help the process.
I wasn't going to use an existing game engine. Why? Well, I don't think it's as much fun as rolling your own.
I have this really old memory of playing a game that mixed Pong with Arkanoid. It must have been on the C64, but I can't remember the name. I was fascinated by this idea. Also, I really liked the idea of making a coop-first game (a paddle on each side). My GF and I love coop games, and there aren't nearly enough quality ones!
Before long, I had something resembling a game: two paddles and some blocks. Only it was really ugly. This is where I met my artist. She was a designer working at the same co-working space, and we decided to make the game together. She would be the art director, on a contract basis, of course.
Before long, she had created some really cool pieces that impressed me, and I knew we were on the right path.
Well... as you may know, building a game takes a long time. And if you're both inexperienced, you're going to have to do a lot of work and rework.
My artist and I quickly realized that I wouldn't be able to afford her on this project. But since we're both very bad at giving up, we worked out a deal where we'd trade "hour for hour." This isn't something I would generally recommend, but it worked well for us. I helped her build out her company website, and she helped me design Paddlenoid.
I've held a guitar before. Why wouldn't I be able to do my own SFX and music?
This was an adventure all on its own, which is why I already made a Reddit post about it :) But what's not in that post is the constant self-doubt and anxiety that came with it.
I was constantly putting off working on sound effects. It's remarkably hard to imagine what a "sticky paddle" sounds like, or what material your ball should be made of.
It took so much time to try different things: speeding up, slowing down, layering, tweaking volume. In the end, I built an entire "poor man's FMOD" to test sounds more quickly.
Making music was a challenge too. I probably made well over an hour of music before settling on the 7 minutes that made it into the game.
The artist asked about the setting of the game...
That was all the prompt I needed... I've had this sci-fi idea for a long time, and I thought it was simple enough. But when I tried to explain it, I realized it was a bit abstract and lacked some definition.
It became a personal challenge to tell it properly, without being obnoxious. That became my white whale...
The story has two endings. It's very layered and I wanted to tell it as succinctly as possible in all skippable cutscenes. Also, you'd still need to be able to enjoy the game even if you skip everything.
All in all, I spent way too much time trying to tell that story. I'm happy with how it turned out, but my next game will definitely not have a story told like that.
Building this game had many twists and turns, mainly because I hadn't thought deeply enough about what the game should be.
Originally, we planned to release on mobile first, then Steam. Since I already had a Windows build, a Steam release seemed easy.
But as I learned more about indie dev and marketing, I noticed a strong bias toward Steam. Releasing there first started to make more sense.
Around November 2025, the game was nearly finished, and we could have soft-launched on Android. But by then, I was deep into "how to market your game" content and became convinced we should aim for a big Steam launch. Surely I could hit 7K wishlists, right?
I'd already started building up X, Bluesky, and Instagram accounts, but growth was slow. Still, it felt like going viral was possible with Steam Next Fest and a well-targeted paid ad campaign.
The plan: join Next Fest in October, build momentum with ads, and release with a discount shortly after, no matter if we had 5,000 or 12,000 wishlists.
Entering Next Fest is easy. Paid ads... not so much. I wrote another Reddit post about that.
Suffice it to say: there are no shortcuts. Building an audience from scratch takes a lot of time. And the math on ads is brutal: If your game doesn't have broad appeal or generate high revenue per player, paid ads just don't seem to make sense.
Launching with a discount failed. Unfortunately, a discount is something you need to plan at least 72 hours in advance... When it came time to launch, it was too late to set up a discount.
Which brings us to the numbers:
At launch, we had 240 wishlists. On release day, we sold a whopping 7 copies.
I love Paddlenoid, and I loved making it. Financially, it set me back, a lot, mostly because I could have been working for clients or my company.
It was an incredibly stressful period. But I liked how it got me closer to the people around me. My GF, my friends, everyone is curious about the game you're making and they all have fun ideas.
I also discovered a creative side of myself. At first, I was okay with things looking or sounding bad because I was afraid of what people would think of my honest effort. Now, I feel a lot more confident as a creator.
Also, I learned a lot. I discovered amazing devs making inspiring things. I'm starting to understand the market and the process.
And that's why, even though it was stressful and financially painful, I'm really excited at the prospect of starting my next project!
Paddlenoid is definitely one of the projects I'm most proud of. Building it has been a personal Mount Everest.
If you're curious, here's a link to the game:
https://store.steampowered.com/app/2789390/Paddlenoid/
Hi! I’m a solo developer working on my first commercial game, an action RPG (a niche that’s quite difficult to monetize).
Since I started working on the game, my wishlist growth was really low. In fact, until just a few months ago I was barely reaching around 800 wishlists.
Sometimes I even doubted how I was managing things, but then suddenly, in just a few months, I managed to go past 5,000.
I’d like to share what I think were my biggest mistakes, and what seem to have been the things that worked:
1. Joining showcases too early
From the beginning, people told me the game looked good, so I decided to submit it to my first showcase. I got my first 400–500 wishlists, but the truth is the game was still in very early stages, so the traction was quite low. Now I can’t go back to those showcases, and I’m sure that if I had waited, the results would have been much better.
2. Not understanding early marketing properly
This is closely related to the previous point. Early development should definitely be shared and promoted, but mainly with the goal of getting honest feedback.
I think it’s important to understand what kind of actions make sense at that stage. Social media, devlogs, etc. are good channels, but it might not be the right moment to target your final audience yet, since the impact will be much lower than when the game is more polished, and you may lose momentum. That’s why showcases, festivals, and Next Fest (where you only get one shot) are better saved for when you truly have something you’re proud of.
3. Overusing the “solo dev” angle
The reality is that the end user doesn’t really care if your game was made by a solo developer. In my case, it even caused some issues:
⸻
Given all these mistakes and seeing my wishlists stagnate, I seriously considered dropping the project. But luckily, over the last few months I tried a few new things that helped things start growing again:
⸻
I’m sure none of these actions alone are game-changing, but together they’ve significantly improved my wishlist growth.
I know that for many people 5,000 wishlists might not sound like much, but for me, as this is my first commercial game, it has given me a lot of hope. For the first time I can realistically see reaching the 7,500 wishlist benchmark that is often recommended for a successful Steam launch.
I hope my experience can be useful to other developers, and if you’re just starting out like me, keep in mind:
I’d love to hear other experiences as well. If you feel like sharing, I’m sure we can all learn from them.
Spent forever debugging why my input didn’t work on an incredibly complex system and it turned out to be a string input typo. I read it at least 5 times looking for mistakes but never ONCE caught the error.
I want a refund on my brain.
| submitted by /u/finallyanonymous |
Hi all, it's been a while since I last introduced tmplx :)
But now we have a working site that's built on top of tmplx!
If this is the first time you're hearing about the project:
tmplx is a framework for building full-stack web applications using only Go and HTML. Its goal is to make building web apps simple, intuitive, and fun again. It features:
Developing with tmplx feels like writing a more intuitive version of Go templates — where the UI magically becomes reactive.
I'd be grateful if you could try it out, play with the demos, or even build something on your own!
Follow-up to my earlier posts on adaptive hedging. Finally cleaned up the code enough to share.
https://github.com/bhope/hedge
Drop-in http.RoundTripper, also a gRPC interceptor. Default is one line:
```go
client := &http.Client{Transport: hedge.New(http.DefaultTransport)}
```
How it works: a per-host DDSketch tracks latency (~35ns per update, 30s window). When a request goes past the current p90, it fires a backup. The winner's response is used and the straggler gets cancelled.
What makes this unique is the budget: a token bucket capped at 10% of RPS. Without it, when your backend goes bad, hedging doubles the load and makes everything worse (so it's best to stop hedging when things are on fire).
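The budget idea can be sketched as a token bucket where each primary request deposits a fraction of a token and each hedge spends a whole one. Names and structure here are my own, not the library's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// hedgeBudget limits hedged (backup) requests to a fraction of overall
// traffic. Every primary request deposits `ratio` tokens; sending a
// hedge costs one full token. When the backend melts down and every
// request wants to hedge, the bucket empties and hedging stops instead
// of doubling the load.
type hedgeBudget struct {
	mu     sync.Mutex
	tokens float64
	max    float64 // cap so a quiet period can't bank unlimited hedges
	ratio  float64 // e.g. 0.10 for "hedge at most ~10% of requests"
}

func newHedgeBudget(ratio, max float64) *hedgeBudget {
	return &hedgeBudget{ratio: ratio, max: max}
}

// OnRequest is called once per primary request.
func (b *hedgeBudget) OnRequest() {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.tokens += b.ratio
	if b.tokens > b.max {
		b.tokens = b.max
	}
}

// TryHedge reports whether a backup request may be sent now.
func (b *hedgeBudget) TryHedge() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newHedgeBudget(0.10, 5)
	hedges := 0
	for i := 0; i < 100; i++ {
		b.OnRequest()
		if b.TryHedge() { // pretend every single request is slow
			hedges++
		}
	}
	// Even with 100% of requests wanting a hedge, only ~10 get one.
	fmt.Println(hedges)
}
```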
Benchmark (lognormal backend, 5% stragglers, 50k requests):
The static number is kind of cheating because I picked it after seeing the distribution; in practice you probably won't have that luxury.
One thing I haven't done full-fledged testing on in real production environments is LLM inference. TTFT tails aren't great: queueing, cold replicas, KV cache behavior. If you run a Go proxy in front of vLLM or TGI, I'd love to know if this helps.
What I am looking for:
Feedback on WithEstimatedRPS: should the library just figure the RPS out on its own? Please feel free to ask me anything. I'm especially interested in what you think of the DDSketch choice (I went back and forth between it and t-digest).
Hey everyone,
I got tired of choosing between raw Prometheus text and a full Grafana stack just to check on a Caddy server, so I built Ember. It's a zero-config TUI dashboard (using Bubble Tea) with live RPS, latency percentiles, status codes, certs, and upstream health. It also ships JSONL, Prometheus daemon, and one-shot status modes for cloud environments.
If you'd like to see it in action, there's a GIF showcasing it in the repository.
https://github.com/alexandre-daubois/ember
Feedback and issues very welcome!
Here is the link to previous reddit thread: https://www.reddit.com/r/golang/comments/1spj2bu/i_built_a_container_from_scratch_in_go_to/
An update to the above:
So in the first part I implemented namespace isolation while building a container from scratch. Now I have also implemented resource limits using cgroups and filesystem isolation using chroot.
The container runs in its own Alpine Linux filesystem, which is easily replaceable with other distros, just like the base images we use with Docker.
The container now has hard limits on the amount of RAM it uses and the number of processes it can spawn.
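To illustrate the cgroups side: under cgroups v2, those hard limits map onto two interface files in the container's cgroup directory. This pure-function sketch only computes the writes (actually applying them needs root on Linux, plus moving the child PID into cgroup.procs); the helper name and layout are mine, not the blog's code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cgroupWrites returns the (path, value) pairs you would write to set
// hard limits under cgroups v2 for a container named name: memory.max
// caps RAM in bytes, pids.max caps the number of processes it can spawn.
func cgroupWrites(name string, memBytes, maxPids int) map[string]string {
	dir := filepath.Join("/sys/fs/cgroup", name)
	return map[string]string{
		filepath.Join(dir, "memory.max"): fmt.Sprintf("%d", memBytes),
		filepath.Join(dir, "pids.max"):   fmt.Sprintf("%d", maxPids),
	}
}

func main() {
	// 64 MiB of RAM, at most 20 processes.
	for path, val := range cgroupWrites("mycontainer", 64<<20, 20) {
		fmt.Printf("echo %s > %s\n", val, path)
	}
}
```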
Although it is nowhere near as feature-complete as Docker, nor production-ready, it was a great experience, and I got to learn a lot of the caveats and drawbacks of a raw implementation and how Docker handles them under the hood.
Do give a read here: https://blog.iamvedant.in/containers-are-not-magic-cgroups-and-chroot-from-scratch
I would love feedback on what went well, whether it helped you, and what could be improved.
Quaternions are usually considered hard and complex, but they are the king of rotations, crucial for games and math. To understand quaternions by the academic route is like listening to a waiter use long, expensive words to explain a simple carrot salad.
We do not need any of that here. That is why we will not use academic language and we will start with imaginary numbers.
The word "imaginary" sounds like fiction, like things that do not exist in our world, right?
But what exactly does not belong here? If you multiply two negative numbers (two numbers with minus signs) together and the result is still negative (still has a minus sign), that just does not happen in our world. This is imaginary.
Now, suppose we have a number like the square root of -1. When I say the square root of -1, what am I actually trying to say? I am saying: find a number that makes -1 when you multiply it by itself. But how is that possible, when in this world the law negative × negative = positive always holds? It is totally impossible, and that is exactly why we call it an imaginary number. Now see the following calculation.
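Spelled out in symbols (this is my reconstruction of the calculation being referred to):

```latex
i = \sqrt{-1}, \qquad i \times i \;=\; \sqrt{-1} \times \sqrt{-1} \;=\; -1
```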
This kindergarten calculation has a significant role in quaternions. yes it is simple, but it is very powerful, and the entire quaternion is built on it. I have shared this calculation right now because we are currently discussing imaginary numbers. I will not talk about this calculation right now, but we will discuss it further ahead, where you will find out that this is the foundation of quaternions.
So, now we are going to step away from imaginary numbers and jump right into quaternions. We'll use the exact same formula you usually see written in textbooks to represent them. You know the one: xi, yj, and zk. And if you come from a computer background, you've probably seen a w attached to that. But if we just talk about the x, y, and z, those are simply the physical axes you see on a standard 3D graph. Quaternions add imaginary numbers directly to these. Let's look at how these imaginary numbers actually interact with our axes. If I take the term xi, the x is our physical axis, and the i is our imaginary number. And here, that imaginary number literally just means 90 degrees. So, what does xi actually mean? It means a strict 90-degree turn.
But hold on. How did i suddenly become 90 degrees? Wasn't it supposed to be the square root of -1? How did it jump to being an angle? You won't find the answer in high-level physics. You find it right back in the simple, kindergarten-level math we did previously. Let's break that down right now. Go back to our standard graph and take our point xi, placed on the y-axis. Now, say we multiply one more i into it, so we are taking xi and multiplying it by i. Physically, this means adding 90 degrees and then another 90 degrees, which is i². The answer lands exactly at 180, so i² = 180 degrees, and therefore i = 180/2 = 90 degrees. So what does this mean?
If we multiply imaginary numbers together, it directly picks us up and physically drops us on the exact opposite, negative axis (If we start on the positive x-axis, we flip straight to the negative x-axis) and the exact same mechanical thing happens with all the other axes if we start from them.
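You can check the "multiplying by i is a 90-degree turn" idea with ordinary code: Go's built-in complex numbers behave exactly this way. A small demonstration, not part of any quaternion library:

```go
package main

import "fmt"

func main() {
	p := complex(1, 0) // start on the positive x-axis
	p *= 1i            // one 90-degree turn: now on the positive y-axis
	fmt.Println(p)     // prints (0+1i)
	p *= 1i            // a second turn, i.e. multiplying by i²
	fmt.Println(p)     // prints (-1+0i): flipped to the negative x-axis
}
```

Two multiplications by i land the point on the exact opposite axis, which is precisely the i² = -1 flip described above.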
The Strange Co-Dependence of i and i²:
Now, let's assume for a second that this i² just doesn't exist. Should that have any effect on a standalone i? Normal human logic says that i should be independent, completely whole on its own. That means i should exist even without i² being a thing, because obviously i² can never be formed without having an i first.
So, the real question pops up. If we assume there is no such thing as i², can a single i still perform that 90-degree turn? The answer is hidden inside some very strange mathematical logic. Because actually, whatever value or identity i has, it comes entirely from i². If there is no i² in the math, then a single i simply does not have any physical value of its own.
This is a kind of math that literally moves backward instead of forward. It works from back to front. Think about it like this. In normal life, 1 and 1 together make 2. If your base number 1 isn't there, then the number 2 can never be formed. But in this specific game of imaginary numbers, the rule runs completely in reverse. Here, the math clearly tells us that if i² (which is supposed to be the result) doesn't exist, then i (which is supposed to be the base) will not exist either.
PART 2: 3D Quaternions
| I wanted to share this small milestone. After months spent talking about it during my livestreams, I finally have multiple matches going in my tennis game with anime girls! The objective is to bring to life a kind of tennis rarely seen in videogames: the junior/college environment, far from the fanfare of big centre courts, where you're grinding your way through smaller tournaments. Having other matches running around you was an essential part of the entire vision. Tech stack: Rust custom engine with Vulkan via Ash; target platform PC/Steam Deck. |