Saturday, April 4, 2026
| Summary | ⛅️ Breezy in the afternoon. |
|---|---|
| Temperature Range | 11°C to 18°C (53°F to 65°F) |
| Feels Like | Low: 48°F, High: 63°F |
| Humidity | 74% |
| Wind | 16 km/h (10 mph), Direction: 253° |
| Precipitation | Probability: 88%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:31 AM / 🌇 07:11 PM |
| Moon Phase | Waning Gibbous (58%) |
| Cloud Cover | 42% |
| Pressure | 1015.65 hPa |
| Dew Point | 50.56°F |
| Visibility | 6.02 miles |
vintage-schematics is a tool (and library) for converting Minecraft schematics to the Vintage Story WorldEdit format. This allows you to bring your Minecraft structures into Vintage Story!
Here's what a Minecraft village looks like in Vintage Story!
There's a desktop app for Windows and Linux (using egui) as well as a webapp (using wasm-bindgen which I love and webpack which I don't).
Try it out on your own Minecraft builds! It supports the Litematica, Schematica, Sponge, and Vanilla "structure file" formats. Let me know if you find any missing blocks (very likely) or bugs (hopefully unlikely)!
It's open source, hosted on GitLab.
https://i.redd.it/dzk5l5ia72tg1.gif
Sharing files can be a frustrating problem. I often dislike Apple software, but AirDrop is admittedly convenient for a quick transfer.
Tools like croc and LocalSend help, but they have their own problems like relying on relays for the actual data transfer or only working if both devices are on the exact same Wi-Fi network.
Thanks to iroh (https://github.com/n0-computer/iroh), I built an app in Rust & Flutter that can share files directly between two devices anywhere in the world. Just exchange a simple 6-digit code, and it establishes a direct, end-to-end encrypted connection between the devices to send files. iroh takes care of all the complex hole-punching magic needed to get it to work correctly across NATs.
It's still in the very early stages, but I am happy that the Rust ecosystem is so mature. I was able to build the core in Rust without having to worry about the low-level (and very complicated) details of connecting devices together (thanks again to the iroh developers!), and then link it effortlessly into a Flutter UI.
I have attempted to build this a few times over the years but always gave up due to the sheer number of things I needed to focus on to get it right! I am happy I could do it now. It still needs a lot of work but I had to share it here.
Do try it out and share your thoughts!
I've been getting into cybersecurity, which means I've done some CTF challenges. So when I got inspired by "terminal.shop" (the SSH coffee shop made by teej_dv and ThePrimeagen) and wanted to build an "SSH application", I decided to build a CTF platform (like CTFd, but in the terminal, over SSH). This is what I have been building for the last 2-3 weeks, and I finally feel it's at a stage where I can share it and get useful feedback in order to continue improving it.
The GitHub link is https://github.com/d-z0n/SSHack, and there are some basic instructions for setting up the server in the readme. I have a lot of plans to improve this further, so see this as a first draft (it should still be enough to host a simple CTF for fun with friends or at school in its current state).
I have also set up a really simple demo CTF. To access it, run `ssh ctf.dz0n.com -p 10450` (port 10450 was assigned at random by my ngrok tunnel; the actual port is 1337 by default, but this is configurable).
Anyways, if you are hosting a ctf, feel free to use this as your platform and please create an issue on github if you experience any problems / have any questions. In the meantime I will continue development. Happy Easter!
Hi rust people - this isn't _necessarily_ a Rust-specific question, but given that's my current focus, I'd love to hear it.
What are some examples, for you, of great, high-usability CLI applications that include some level of "flair", like progress bars, spinners, etc.?
I'm not talking about full TUI apps like gitui or claude code or codex, more just one-off console commands that give you a good idea of what's going on (download indicators, loading spinners, etc) that you feel are just delightful to use. Bonus points if they are rust based (because then I could lean on them for understanding patterns or whatnot) but any language is fine. Something analogous in the js realm might be `yarn`, it shows a bunch of parallel progress and so on.
Obviously not everyone loves progress bars and spinners, so if that's you, this probably isn't a great question for you, but if you DO love that stuff, please let me know your recommendations.
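For anyone collecting patterns rather than tool names: the basic mechanic behind most of this flair is surprisingly small. Here's a minimal, dependency-free Rust sketch of a redrawn-in-place progress bar (my own illustration; real CLIs would usually reach for a crate like indicatif):

```rust
use std::io::{self, Write};

// Render a fixed-width text progress bar; purely illustrative.
fn render_bar(done: usize, total: usize, width: usize) -> String {
    let filled = done * width / total;
    format!(
        "[{}{}] {:3}%",
        "#".repeat(filled),
        " ".repeat(width - filled),
        done * 100 / total
    )
}

fn main() {
    for i in 0..=10 {
        // '\r' rewrites the same line, which is all a basic bar/spinner needs.
        print!("\r{}", render_bar(i, 10, 20));
        io::stdout().flush().unwrap();
    }
    println!();
}
```

The `\r`-and-redraw trick is the core of nearly every spinner and bar; libraries add terminal-width detection, colors, and multi-bar layout on top.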
(edit: yes I realize now that CLI interface is like saying ATM machine, but I can't edit the title)
I've been using Rust for years and I've encountered a new problem that sure feels like an unnecessary language limitation: adding a constraint to a trait breaks another trait that's derived from it. For example:

```rust
use std::ops::Add;

trait AddToI32
where
    Self: Sized,
    Self: Add<i32, Output = Self>,
    i32: Add<Self, Output = Self>, // this line causes an error
{
}

trait Subtrait
where
    Self: AddToI32,
    i32: Add<Self, Output = Self>, // this line makes the error go away
{
}
```

My thinking is that the offending trait bound in AddToI32 should result in the bound `i32: Add<Self, Output = Self>` being propagated to Subtrait, but that's not what happens. I can kind of see the logic that adding a constraint with Self on the left can't magically add a constraint with i32 on the left, but it would be so convenient!
Can anyone offer insight into whether what I'm trying to do should or should not work?
Playground link here.
While implementing benchmarking and outlier detection in Rust, I noticed something interesting: when the data is very stable, even minor normal fluctuations were flagged as outliers, and the standard algorithms (IQR, MAD, and Modified Z-Score) became too aggressive.
This is a known problem called Tight Clustering, where data points are extremely concentrated around the median with minimal dispersion.
The goal of the project is to detect true anomalies, like OS interruptions, context switches, or garbage collection, not to penalize the natural micro variations of a stable system.
Example
IQR in a very stable dataset: suppose Q3 = 6.004 ns and IQR = 0.004 ns. With the standard fence of 1.5×IQR, the upper bound for outliers would be:

6.004 + (1.5 × 0.004) = 6.010 ns

A sample taking 6.011 ns (only 0.001 ns slower) would be flagged as an outlier. This minimal variation is acceptable and normal in benchmarks; it shouldn't be flagged as an outlier.
To reduce this effect, I experimented with a minimum IQR floor proportional to the dataset's magnitude (1% of Q3); tests showed good results.
With the floored IQR on the same very stable dataset, the upper bound becomes:

6.004 + (1.5 × 0.060) = 6.094 ns

A sample taking 6.011 ns would NOT be flagged as an outlier anymore. The detection threshold now scales with the dataset's magnitude instead of collapsing under extremely low variance.
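A minimal sketch of the floored fence described above (the function names are mine, not from any library):

```rust
// Standard 1.5×IQR upper fence.
fn upper_fence(q3: f64, iqr: f64) -> f64 {
    q3 + 1.5 * iqr
}

// Same fence, but the IQR is clamped to a floor of 1% of Q3,
// so the fence cannot collapse on tightly clustered data.
fn upper_fence_floored(q3: f64, iqr: f64) -> f64 {
    let floor = 0.01 * q3;
    q3 + 1.5 * iqr.max(floor)
}

fn main() {
    // Plain IQR: 6.004 + 1.5 * 0.004 = 6.010 ns, so 6.011 ns is "an outlier".
    println!("{:.3}", upper_fence(6.004, 0.004));
    // Floored: 1% of Q3 ≈ 0.060, widening the fence to ~6.094 ns.
    println!("{:.3}", upper_fence_floored(6.004, 0.004));
}
```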
I don't know how this is normally handled, but I didn't find another solution other than tweaking and altering the algorithm.
How is this usually handled in serious benchmarking/statistical systems? Is there a known approach for tight clusters?
I had a bit of a weird insight just now. I am curious to hear if I am just mad :)

If you took the safety of Rust and combined it with the safety of OS/400 (for the younger Rustaceans: OS/400 was the operating system of a range of IBM minicomputers, and it was without a doubt one of the most well-designed and safe operating systems humanity has ever known, yet it lost commercially to Windows and Linux), wouldn't that be heaven?

(Just a thought for the weekend, don't take it too seriously.)
Orthotope is a Rust allocator library aimed at allocation-heavy workloads such as ML inference, tensor pipelines, batched embedding or reranking services, and other high-throughput systems.
In the current local run, Orthotope was:
- about `1.5x` to `13x` faster on same-thread hot-path reuse workloads
- about `1.1x` to `3x` faster on `embedding_batch`
- about `3x` to `4.5x` faster on `mixed_size_churn` against `jemalloc` and `mimalloc`
- about `3x` faster on `long_lived_handoff`
In my day job I am a fullstack dev, who usually ends up with some sort of Angular/React/Svelte/Vite Frontend and a Spring/Quarkus backend. One thing that always annoyed me is the separation this kind of (Thin)-Client/Backend approach caused. Not only will it enforce some sort of dedicated Layer between your UI code and domain logic (Usually: Domain <-> API Layer (DTOs) <-> View-Model) but also separate the team into two: Backend and Frontend. A technical cut in the team that is absolutely not necessary but usually required due to the requirement that a developer MUST know these two stacks.
So I always eyed projects that aimed to provide a more unified approach: back in the day, SSR with PHP/ASP.Net, and more recent and modern approaches using Blazor, Next, and of course Dioxus (and Leptos).
What fascinated me about Blazor back then was the approach to transparently hide the communication layer between UI Code and Backend-Code, without enforcing pure SSR and at the same time seamlessly integrating the strengths of Browser based rendering + using just one programming language and basically treating the UI as an Adapter in the architecture instead of a whole other thing.
Now, nobody wants to write C# unless they are forced to, but Rust is a different story, so Dioxus promising all this and more is a pipe dream that finally looks like it's getting closer.
With the recent developments in 0.7 (hot-reloading, the rsx! syntax, asset! integration, excellent tooling), it finally looks more and more production-ready.
So I took it upon myself to finally try it in one of my side projects. And I must say: No, it does not spark joy yet.
Now, this comes as no surprise. This conclusion has been reached before, even quite recently (there was a discussion 3 months ago); most notably, FasterThanLime also came to it. That said, I have very little experience with the Dioxus ecosystem, so most of my problems might already be solved, and I am looking forward to your responses.
I used React a lot, notably MUI. Now, comparing very young and small component libraries to a behemoth like MUI is not fair, but every time I have to switch from MUI to a different library, it feels like going back to the beginnings of web development. MUI makes it very effortless to create nice-looking (opinionated, but that's not a bad thing) UIs. A11y is mostly integrated, and the API is very well designed, hiding away a lot of the noise that you usually see.
In Dioxus you usually use Tailwind. Tailwind is very flexible, but you end up annotating your components with a shit ton of classes, and in the end you have a lot of inconsistent and noisy code with lots of duplication. To be fair, there are projects like daisy-rsx and dioxus-shadcn that take some of that away, but the core problems remain. I just don't have the time to deal with this anymore.
So it's not Dioxus's fault, but this ecosystem needs a really good component library before it can reasonably be adopted by the industry, or by me, with my very little free time.
Actually, I was surprised how "well" it worked. The syntax was a little weird, notably that it required me to name components exactly after the routing paths. But it seemed very inflexible when all I wanted was to extract a query param or a URL segment to do localization. It was the same deal with setting the URL in the router, when all I wanted was to replace a query param.
I wish my toy project had given me more opportunity to play around with other features, most notably the fullstack one, deployments on multiple platforms, etc. But I'll save that experience for future pet projects.
While I think Dioxus does not spark joy yet, I don't think it's Dioxus's fault. What it needs is a mature ecosystem; the framework itself nails most of the basics.
So, what do you think?
I tried to find some libraries that weren't "entirely" vibe coded. Unfortunately, I could not find any that were not at least partly vibe coded.
Among these I found:

* https://crates.io/crates/dioxus-tw-components
* https://crates.io/crates/daisy_rsx
* https://crates.io/crates/dioxus-iconify (vibe coded, but I like the idea)
* https://crates.io/crates/dioxus-free-icons
I know this is one of the most covered topics in Rust, but the abstraction in most tutorials hid too much for a systems programmer coming from C/C++ to be able to grasp the concept.
So I dug around the standard library, and this blog post teaches lifetimes from the ground up, based on my findings.
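As a taste of the territory (my own minimal example, not taken from the post): the classic case where an explicit lifetime annotation is unavoidable is a function returning a reference tied to its inputs:

```rust
// The returned reference must not outlive either input; the explicit
// 'a annotation is how the signature expresses that to the borrow checker.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("lifetimes");
    let b = String::from("rust");
    // Both borrows are still alive here, so the call type-checks.
    println!("{}", longest(&a, &b));
}
```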
Please read and comment!
Hi r/rust.
I've been working on this personal project for a while now.
I'm a control systems engineer / automation engineer by profession. I regularly work on hardware (PLCs) that was installed during the 1980s / 1990s. Obviously this legacy hardware requires legacy software for programming, fault finding etc.
The most common legacy PLCs that I work with regularly are Siemens STEP5 / S5 PLCs (first introduced in 1979). Honestly, they just don't tend to fail, so there are still way more of these chugging along in 2026 than there should be.
My biggest issue with working with them is the software environment. The official Siemens package started off as a CP/M program, and then transitioned to MS-DOS, and the last version released supports up to Windows XP (as long as it's running on a single core CPU, or XP is booted in single core mode). It uses some add-on packages for configuring communication cards etc. - but these are still the original CP/M binaries, being emulated in DOS.
There are third-party alternatives to the official Siemens software that seemed to spawn in the 90s as MFC applications but most of these feel clunkier than using the original DOS program.
So I began working on my own re-imagining of the DOS software when I started learning Rust. My primary development environment is Linux, but I also want to run this on Windows.
I've covered the file formats used by S5-DOS, and implemented around 80% of the serial protocol used for communicating with the actual PLCs.
I struggled to settle on a GUI framework to use but began to experiment with GTK4.
I only took a cursory glance at things like Iced, but none of them seemed (or looked) to make it easy to implement a tree-like navigator (which seems to be the only way I can imagine navigating a folder / block structure, where a block could be nested in the n-th sub-folder).
When it comes to editing the actual STL (loosely, the Siemens S5 equivalent of assembly), I'd like to implement syntax highlighting, auto-formatting / alignment etc. along with line numbers (or offset address of each line etc.)
Primarily I'm only really concerned with being able to edit and compile (and debug) the STL code, but the other methods of programming PLCs are visual languages like LAD (ladder - designed to mimic an electrical schematic for relay logic) or FBD (function block diagram - AND gates, OR gates etc.). If I decide to implement these at a later date I'd need some way of drawing boxes / shapes in an editor widget of some kind, along with adding text fields to input addresses above or below these graphical elements.
I started throwing together a GTK4 application using the Relm4 crate but the GUI development stalled while I was focusing on researching and implementing the AS511 protocol (the serial comms) for my underlying library.
What I've developed so far runs great on Linux, for the small portions I've implemented (which is mostly just navigation and editing of metadata). It runs on Windows but crashes or freezes randomly in ways I can't reproduce on Linux (admittedly, I haven't looked into it much at this point).
Now I'm wondering whether I should switch to a different GUI framework before I go any further.
I have uploaded some screenshots of the original DOS program, and my partial GTK implementation here: https://imgur.com/a/Udfkbx6
TLDR:
Are there any other GUI frameworks I could be using in 2026 to implement a basic but modern-feeling IDE (with syntax highlighting, tabbed windows, and tree-like navigation between the different elements of the project's hierarchy)?
Hey everyone,
I’ve been working on a browser engine called Aurora, focused on a GPU-first rendering pipeline using wgpu — no Skia, no Chromium, just wgpu and Vello.
I recently reached a milestone where my custom GPU painter can render a static Google homepage fixture:
To be clear, this is not a full browser yet:
But it’s the first time I can render a real-world page structure end-to-end through my own pipeline, which feels like a big step.
Next steps:
Would love feedback, especially from anyone who’s worked on rendering engines or GPU pipelines.
This is the page I used to test:
```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Google</title>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <header class="topbar">
    <nav class="top-links">
      <a href="#">Gmail</a>
      <a href="#">Images</a>
      <a href="#" class="app-link">Apps</a>
      <a href="#" class="sign-in">Sign-in</a>
    </nav>
  </header>
  <main class="hero">
    <h1 class="logo" aria-label="Google">
      <span class="g-blue">G</span><span class="g-red">o</span><span class="g-yellow">o</span><span class="g-blue">g</span><span class="g-green">l</span><span class="g-red">e</span>
    </h1>
    <div class="search-shell">
      <span class="search-icon">Q</span>
      <input class="search-input" type="text" placeholder="Search Google or type a URL">
    </div>
    <div class="actions">
      <button>Google Search</button>
      <button>I'm Feeling Lucky</button>
    </div>
    <p class="tagline">Static Aurora fixture for Google homepage rendering.</p>
  </main>
  <footer class="footer">
    <div class="footer-row">
      <span>United States</span>
    </div>
    <div class="footer-row footer-links">
      <a href="#">Advertising</a>
      <a href="#">Business</a>
      <a href="#">How Search works</a>
      <a href="#">Privacy</a>
      <a href="#">Terms</a>
      <a href="#">Settings</a>
    </div>
  </footer>
</body>
</html>
```

https://github.com/jams246/i-built-i-built-builder
I've been seeing a lot of:
I built ...
posts here recently and I realized that the real missing piece was not another CLI tool, but a tool for building tools that are built to be posted as "I built" posts here on reddit.
So what is this tool? I'm glad you asked!
I built I built builder is a CLI that scaffolds Rust CLI tools whose natural end state is an "I built" post. With I built I built builder, you can build a tool, build the README, build the release workflow, and build the post explaining that you built the thing that I built I built builder helped you build.
In other words, I built I built builder to help people say "I built" with more consistency, velocity, and mechanical sympathy.
The idea is simple: instead of building a tool and then separately writing about how you built it, I built I built builder helps you build the thing and the "I built" around the thing you built.
Example:
i-built-i-built-builder --name tomlshave --solves "trimming whitespace in embedded TOML snippets"
That command lets I built I built builder scaffold the tool, build the docs, and build the "I built" narrative around why the built tool needed to be built.
Anyway, I built I built I built builder, and now I’m posting that I built I built I built builder so that other people who build tools can use I built I built builder to build better “I built” tools and better “I built” posts about the tools they built.
Feedback welcome. Validation preferred.
As the title says, what are some tools you have as a Rust dev that make the DX better?
I currently use these:
cargo-zigbuild for cross compiling.
cargo-edit mostly for the set-version command and bumping versions.
cargo-audit for checking deps for security vulnerabilities.
I want to know if there are cool things out there that I'm missing. Thanks!
In retrospect, I thought it would be easier for me and my partner to just use the same account as it’s the two of us at the studio.
I naturally put our studio name as the handle, but I'm seeing that everyone here seems to use a personal account. Now, my personal account name is something similar to "delicious meat", which I do not love.
Should I just start a fresh new one and have like 4 accounts or should i just keep this one?
I made a game called Hidden in Plain Sight for the Xbox 360 Indie Games platform.
I subsequently ported it over so that it works on the Xbox One as part of the Creators Club or something like that. NOT XBLA. That was about 8 years ago. I haven't touched it since, and it has been a consistent (albeit small) revenue stream. That is to say, it's not a totally dead game today: people are buying it and presumably playing it every month.
https://www.xbox.com/en-US/games/store/hidden-in-plain-sight/9PL5QDVRX9SR
Within this last week, I was contacted by two separate people with the same error. The game is failing saying "Error signing into Xbox Live".
Did something change recently on the MS backend that would start causing this error? As I said, I haven't touched anything, but it suddenly has stopped working for at least two people.
Any ideas?
A lot of game mechanics sound simple on paper (movement, AI, basic interactions), but once you actually start breaking them down, they get complicated fast.
For example, something like “a car follows a track” seems straightforward. But then you run into things like handling sharp turns, adjusting speed so it doesn’t look unnatural, avoiding collisions, and dealing with edge cases where behavior suddenly breaks.
It turns into a chain of small problems that all affect each other.
Is this just the nature of game development, or is it usually a sign that something is being overdesigned or approached the wrong way?
Spent a fair few hours tweaking and smoothing out Movement & Controls for my Main Character...
The movement code is now over 800 lines, with most of the basic core mechanics in place. How many lines did your character end up needing once you finished your game?
Some important info:
- I'm most interested in being an artist for games, but I would like to start making indie games (meaning I should learn the basics of most game dev roles). I think programming, writing, and designing things like puzzles or mechanics could be fun as well.
- I have no idea how to code, and never had any real interest in compsci before realizing this is what I want to do. (Also, what is the best coding language to learn as a wannabe game dev?)
- I excel in school, and am especially good at writing, the arts, and math (as well as having a general passion for video games), hence why I feel I would be a good fit for this sort of career.
To clarify my title question a little, I mean like, what should I be doing right now to better prepare me for a job in this industry? What type of education (ideally in Canada or Australia) should I be looking towards? How does one even GET a job in game development..?? This is definitely my dream career, but in my view, it seems practically impossible to be anything more than an indie dev.. (@__@;)
Our NVIDIA GDC 2026 session recordings.
I’ve had an idea where guns and melee are in the same game. I recently told a friend about my idea for the game, and he told me that it’d be horribly unbalanced.
My initial idea was to make the reloading and rechambering processes manual so that guns (while also giving guns one-shot headshots) would be balanced against melee, but he told me that it wouldn't be fun and that the guns would still end up being overpowered.
Is there any way to actually balance guns with melee without making one too tedious or overpowered?
I spent about 30 hours reverse-engineering the code of Silksong, one of the most successful Unity games ever, and found some genuinely impressive (and aggressive) optimizations. Highlights:

Full Video: https://www.youtube.com/watch?v=eC9bIelizlw
I am feeling guilty about this. I have game projects I've put together, half done, etc. The real enjoyment I get out of the hobby is making the mechanics work well and keeping the project structure organized, but I am not as fond of the actual "game" part.
I will spend hours/days perfecting the best way to do something but never really finish out the game once I get the mechanic to work.
Does anyone else feel this way? How to reconcile this with an actual game project??
What does a game designer actually do?
Soft Qualities
Hard Skills
Level up together with the Masters
Haven't played as many games as the Designers?
Q&A
On game forums we often see players fiercely criticizing designers, does being a designer require strong mental resilience?
How does miHoYo's character design process work from nothing to something?
Main takeaway
This is from the 2026 campus recruitment talk when miHoYo combat designer 鸡哥 (Jige - Aquaria) went to Tsinghua University to give a special talk on game design.
Watch the video for the full presentation, above is just a small summary.
Eng Translation: https://youtu.be/SHwHdM3nKPI (by SentientBamboo)
Official Upload: https://www.bilibili.com/video/BV1Td9TBqEqD
Hey there,
I've worked on a demo for 6 months full time. I had no programming background, but I had been working as a 3D artist for a while before that. In those 6 months I've learned a lot and established a good base for the game.

It's my first game as a solo dev, so I have no prior sense of whether my game is doing well and whether it's worth continuing full time for at least another 6 months.
At first, with no big promotion and no demo, just some Reddit posts here and there, I got to around 35 wishlists in 2 weeks.

Then a month later there was a festival I was able to get into, and I got around 350 wishlists there.

Then a month after that I released my demo and got lucky: a big streamer played my game, getting around 150k views on his video, and some other streamers played here and there and liked it. But the conversion to wishlists is really low, bringing me today to a total of 1,100 wishlists.
The demo has a median playtime of 33 minutes and going up because of my updates.
The feedback I got from streamers playing is that it's either a 9/10 or a meh 5-6/10.
I plan to go into the June next fest, but I don't know what to expect of that.
What do you guys think?
Should I put another 6+ months to finish the full game?
If you had my data would you do it?
I have a very large code base where the dependency graph looks more like a plate of spaghetti. The result is that touching a file often triggers a lot of downstream deployments, because many packages include a lot of unrelated stuff.

Ideally, I'd like something that can identify functions with no in-package dependencies that I could split out into different packages, or something that could identify thin dependencies on other packages that could be severed.
Good Friday: I'm studying Go, and my sister walks in complaining about work.

She uses RedTrack for paid traffic and spends the whole day doing the same thing: opening a thousand tabs, applying a filter, waiting for it to load, going back, filtering again… just to find out whether a campaign is turning a profit.

I said: "this can be solved with one command in the terminal".

She laughed. I opened VS Code.

A few hours later, a CLI was born that talks directly to the RedTrack API and solves exactly that problem.

Today it has two modes:

• CLI

You run something like:

**redtrack campaigns list --status active --json**

It spits out JSON or CSV, works with pipes, and is easy to plug into automations or even AI agents.

• TUI (dashboard mode)

A dashboard-style terminal interface.

Navigate accounts, campaigns, ads, and conversions with just the keyboard, with full drill-down.

The stack I used:

Go + Cobra + Bubble Tea v2 + Lipgloss + Bubbles

A single binary, zero dependencies.

It's still an MVP, but it already covers:

• campaigns
• conversions
• dashboard with today's stats

I'm working toward full CRUD for offers, networks, and landings.

Some decisions I made and wasn't sure would make sense for people who use tracking day to day:

• I used /campaigns/v2 (paginated) as the default instead of the simpler v1
• Config via local file + env + flags (in that order of priority)
• In the TUI, if there's no API key, everything is locked and it goes straight to the setup screen

If you work with paid/affiliate traffic and are stuck in slow dashboards all day, I'd really like to hear:

how do you check performance today?

what annoys you most about these tools?

would you use something like this in the terminal, or does it not fit your workflow?

If it sounds useful to you, message me and I'll send it over for testing.
This is my Go rate limiter implementation, which supports Fixed Window, Sliding Window, Token Bucket, and Leaky Bucket, with middleware for net/http, Gin, Fiber, and Echo.
This release concentrated on correctness and distributed execution.
I refined the public API, unified rate-limit metadata and HTTP headers, addressed sliding window edge cases, and included Redis Lua scripts to atomically execute the built-in algorithms.
I also added Redis tests and benchmark examples with pre-Lua and post-Lua performance metrics.
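For readers unfamiliar with the algorithm family, here is a minimal single-process token-bucket sketch. It is purely illustrative and not gorl's API; the real library adds atomicity (Redis Lua) and the other algorithms on top of the same idea:

```go
package main

import (
	"fmt"
	"time"
)

// TokenBucket refills at `rate` tokens per second up to `capacity`.
type TokenBucket struct {
	capacity float64
	tokens   float64
	rate     float64
	last     time.Time
}

func NewTokenBucket(capacity, rate float64) *TokenBucket {
	return &TokenBucket{capacity: capacity, tokens: capacity, rate: rate, last: time.Now()}
}

// Allow refills based on elapsed time, then spends one token if available.
func (b *TokenBucket) Allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := NewTokenBucket(2, 1) // burst of 2, refill 1 token/s
	fmt.Println(b.Allow(), b.Allow(), b.Allow())
}
```

The distributed version of this is exactly where the Lua scripts matter: refill-then-spend must be atomic, or two instances can both spend the last token.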
GitHub Repository:
https://github.com/AliRizaAynaci/gorl
Go's govulncheck is solid for known CVEs, but it doesn't cover the full picture: unmaintained packages, license violations, or a low OpenSSF Scorecard score. These are packages that simply shouldn't be in prod.

We have been building vet to fill exactly this gap.

vet is an open source SCA tool written in Go. It reads your go.mod / go.sum and evaluates each dependency against data from OSV, OpenSSF Scorecard, and other sources. The interesting part is how you express policy: it uses CEL rather than config files or flags.
```bash
# Fail CI if any dep has critical CVEs or is unmaintained
vet scan -D . \
  --filter '(vulns.critical.size() > 0) || (scorecard.scores.Maintained == 0)' \
  --filter-fail
```

Or define a policy file you can version alongside your code:
```yaml
# .vet/policy.yml
name: production policy
filters:
  - name: no-critical-vulns
    value: vulns.critical.size() > 0
  - name: maintained
    value: scorecard.scores.Maintained < 5
  - name: approved-licenses
    value: |
      !licenses.exists(p, p in ["MIT", "Apache-2.0", "BSD-3-Clause", "ISC"])
```

```bash
vet scan -D . --filter-suite .vet/policy.yml --filter-fail
```

The filter input is a typed struct (vulns, scorecard, licenses, projects, pkg), so writing and testing expressions is straightforward. There's also a GitHub Action for CI integration.
Repo: https://github.com/safedep/vet
One addition worth calling out separately: time-based cooldown checks.

Most supply chain compromises rely on speed: a malicious version gets published, and automated builds pull it within hours, before detection catches up. A cooling-off period is a blunt but effective guardrail. vet supports this via a now() function in its CEL evaluator (landed via community contribution PR #682):
```bash
vet scan -D . \
  --filter-v2 '!has(pkg.insight.package_published_at) || (now() - pkg.insight.package_published_at).getHours() < 24' \
  --filter-fail
```

The !has(...) guard catches packages so new they haven't been indexed yet, and those get blocked too. The duration is yours to set; 24h is a reasonable default, and some teams go to 7 days.
I've been working on go-apispec, a CLI tool that generates OpenAPI 3.1 specs from Go source code using static analysis. No // @Summary comments, no struct tags, no code changes. Just point it at your project:
```bash
go install github.com/antst/go-apispec/cmd/apispec@latest
apispec --dir ./your-project --output openapi.yaml
```

It detects your framework (Chi, Gin, Echo, Fiber, Gorilla Mux, net/http), builds a call graph from route registrations to handlers, and traces through your code to figure out what goes in and what comes out.
Concrete example. Given this handler:
```go
func CreateUser(w http.ResponseWriter, r *http.Request) {
	var user User
	if err := json.NewDecoder(r.Body).Decode(&user); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		json.NewEncoder(w).Encode(ErrorResponse{Error: "invalid body"})
		return
	}
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(user)
}
```

It produces:
```yaml
/users:
  post:
    requestBody:
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/User'
    responses:
      "201":
        description: Created
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/User'
      "400":
        description: Bad Request
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ErrorResponse'
```

Both status codes, both response types, the request body: all inferred from the code. Fields without omitempty are marked required in the schema.
Some of the harder problems it solves:
- `switch r.Method { case "GET": ... case "POST": ... }` produces separate operations per method. This uses control-flow graph analysis via golang.org/x/tools/go/cfg to understand which code runs under which branch.
- `APIResponse[User]` instantiates the generic struct with concrete types in the schema.
- `w.Header().Set("Content-Type", "image/png")` makes that endpoint's response use image/png, not the default application/json.

What it can't do (static analysis limitations): reflection-based routing, computed string concatenation for paths ("/api/" + version), complex arithmetic across functions for status codes. These require runtime information.
Output is deterministic (sorted keys), so you can commit the spec and diff it in CI.
Background: This started as a fork of apispec by Ehab Terra, which provided the foundational architecture — the AST traversal approach, call graph construction, and the pattern-matching concept for framework detection. I've since rewritten most of the internals: type resolution pipeline, schema generation, CFG integration, generic/interface support, deterministic output, and the test infrastructure. But the original design shaped where this ended up, and I'm grateful for that starting point.
GitHub: https://github.com/antst/go-apispec
Try it: go install github.com/antst/go-apispec/cmd/apispec@latest && apispec --dir . --output openapi.yaml
Would love to hear if you try it on a real project — especially cases where it gets something wrong or misses a pattern. That's the most useful feedback.
Hey r/golang,
I just open-sourced HypGo, a Go framework built from the ground up for AI-human collaboration in 2026. It was developed independently by me (and Claude Code).
Traditional frameworks force AI to read hundreds of lines of handler code.
HypGo flips it: Schema-First + single Project Manifest. AI only reads 6 lines of metadata + 1 YAML (~500 tokens) instead of 5,000+.
Key features:
Quick start:
go install github.com/maoxiaoyue/hypgo/cmd/hyp@latest
hyp api myservice && cd myservice
I haven't finished writing the English version of the wiki yet; I'll be preparing those pages over the next few days.

Thanks for reading!
Do you know any?