Saturday, December 6, 2025
| Summary | ⛅️ Light rain throughout the day. |
|---|---|
| Temperature Range | 11°C to 16°C (52°F to 61°F) |
| Feels Like | Low: 49°F / High: 64°F |
| Humidity | 74% |
| Wind | 5 km/h (3 mph), Direction: 211° |
| Precipitation | Probability: 100%, Type: rain |
| Sunrise / Sunset | 🌅 06:41 AM / 🌇 04:36 PM |
| Moon Phase | Waning Gibbous (56%) |
| Cloud Cover | 74% |
| Pressure | 1011.41 hPa |
| Dew Point | 51.05°F |
| Visibility | 9.81 miles |
I’ve been trying to get this working for hours and I’m pulling my hair out.
I have a custom plugin for golangci-lint, which works totally fine locally. The way they recommend that you build this is with the “golangci-lint custom” command, which actually clones the source code for the linter behind the scenes in order to build the new binary.
This works fine locally, but when we run it in CI we get permissions issues surrounding the flags that are passed into the git clone command that runs as part of this.
It tries to run this, which fails: `git clone --branch v2.4.0 --single-branch --depth 1 -c advice.detachedHead=false -q https://github.com/golangci/golangci-lint.git`

As far as I can tell, this whole thing is failing because the runner has strict permissions and doesn't allow the `-c advice.detachedHead=false` flag. Super annoying!

Has anyone managed to build a custom plugin in CI who can share their workflow files? How is anyone doing this in CI?
I remember being hyped when these pragmas appeared, but do they actually do anything? I still see entersyscall and exitsyscall eating a considerable amount of time.

For example, out of 4s of total call time, I have 0.5s in exitsyscall, 0.4s in entersyscall, and 0.15s in some ospreenterexitenterblahblahblah...

Is there a way to remove these enter/exitsyscall frames? My cgo code does not interact with the Go runtime in any way; it just receives some parameters. It doesn't store any data from Go: at most it processes it, copies it if needed, and returns.
Upd: screenshot of prof https://imgur.com/a/ZhtcHRr
upd2: I know Go is built the way it is. I just want to know whether I'm measuring everything correctly and whether I've done everything Go provides to optimize performance, so I can live with it and build my program around these limitations.
For instance, say there's a Library A and a Library B that do the same thing (in-memory database). You need one of them to implement your solution; do you have a methodology or flow you go through to pick the best one?
Something like taking into account release cadences, GitHub stars, etc?
Hello guys, I am building a CLI tool and was writing a function that iterates over a slice of strings holding file paths. As a simple test, I wanted to see whether I could open them at all, so I wrote this:
```go
func TestMain(t *testing.T) {
	files := shared.GoFilesCrawler("../")
	for file := range files {
		fileOp, err := os.Open(file)
		if err != nil {
			fmt.Println(err)
		}
		fmt.Println("Able to open the file")
		defer fileOp.Close()
	}
}
```

And I was having trouble because it told me that os.Open() couldn't accept the argument as an integer, and I was wondering why it assumed file was an integer. It was weird; I had the Go wiki's page on range clauses right in front of my eyes and there was nothing about this. So what I did was:
```go
func TestMain(t *testing.T) {
	files := shared.GoFilesCrawler("../")
	for _, file := range files {
		fileOp, err := os.Open(file)
		if err != nil {
			fmt.Println(err)
		}
		fmt.Println("Able to open the file")
		defer fileOp.Close()
	}
}
```

And this one works... So my question is: why is the first variable in the for range an integer? Must it be an integer? Are there specific cases?
Built a CLI-first HTTP client in Go that combines Postman's features with Vim navigation and a fast load testing performance mode, all in your terminal with bubble tea.
What I did:
Why?
I found it annoying switching between Postman for dev work and separate tools for load testing, especially since I use my terminal to build my project anyway. So I unified them in a single terminal-based tool where I'm already doing my development: an interactive TUI for API exploration, plus a CLI mode for benchmarking and CI/CD.
GitHub: https://github.com/owenHochwald/Volt
Happy to discuss the implementation or share benchmark methodology if anyone's interested.
Five years ago, we started building a MySQL-compatible database in Go. Five years of hard work later, we're now proud to say it's faster than MySQL on the sysbench performance suite. We've learned a lot about Go performance in the last five years. Go will never be as fast as pure C, but it's certainly possible to get great performance out of it, and the excellent profiling tools are invaluable in discovering bottlenecks.
Is there a way to parse strings which contain locally formatted numbers to integer/float using the standard packages?
With message.Printer I can format integer and float numbers to a locally formatted string based on a language.Tag.
But I need it the other way around: I have a string containing a locally formatted number which I need to convert to an int/float based on a language.Tag.
Hi!
I built a plugin that exposes JetBrains IDE code intelligence through MCP, letting AI assistants like Claude Code tap into the same semantic understanding your IDE already has.
It now supports Go and GoLand as well.
Before vs. After
Before: “Rename getUserData() to fetchUserProfile()” → Updates 15 files... misses 3 interface calls → build breaks.
After: “Renamed getUserData() to fetchUserProfile() - updated 47 references across 18 files including interface calls.”
Before: “Where is process() called?” → 200+ grep matches, including comments and strings.
After: “Found 12 callers of OrderService.process(): 8 direct calls, 3 via Processor interface, 1 in test.”
Before: “Find all implementations of Repository.save()” → AI misses half the results.
After: “Found 6 implementations - JpaUserRepository, InMemoryOrderRepository, CachedProductRepository...” (with exact file:line locations).
It runs an MCP server inside your IDE, giving AI assistants access to real JetBrains semantic features, including:
LINK: https://plugins.jetbrains.com/plugin/29174-ide-index-mcp-server
Also, check out the JetBrains IDE Debugger MCP Server, which lets Claude autonomously use the IntelliJ/PyCharm/WebStorm/GoLand (and more) debugger, and which supported Go from the start.
Hi r/golang!
We are the JetBrains GoLand team, and we’re excited to announce an upcoming AMA session in r/Goland!
GoLand is the JetBrains IDE for professional development in Go, offering deep language intelligence, advanced static analysis, powerful refactorings, integrated debugging, and built-in tools for cloud-native workflows.
Ask us anything related to GoLand, Go development, tooling, cloud-native workflows, AI features in the IDE, or JetBrains in general. Feel free to submit your questions in advance; this thread will be used for both questions and answers.
We’ll be answering your questions on December 8, 1–5 pm CET. Check your local time here.
Your questions will be answered by:
We’re looking forward to chatting with you!
I'm working on a small web app project, domain driven, each domain has handler/service/repo layer, using receiver method design, concrete structs with DI, all wired up in Main. Mono-repo containerised application, built-in sqlite DB.
App works great but I want to add testing so I can relieve some deployment anxiety, at least for the core features. I've been going around and around in circles trying to understand how this is possible and what is best practice. After 3 days I am no closer and I'm starting to lose momentum on the project, so I'm hoping to get some specific input.
Am I supposed to introduce interfaces just so I can mock dependencies for unit testing? How do I avoid fat interfaces? One of the domains has 14 methods. If I don't have fat interfaces, I'm going to have dozens of interfaces and all just for testing. After creating these for one domain it was such a mess I couldn't continue what genuinely felt like an anti pattern. Do I forget unit testing entirely and just aim for integration testing or e2e testing?
Hi Gophers!
I'm building a Flutter desktop app and I decided to write the app's backend in Go because it is a bit faster than Dart for what I'm doing, and in the future if I decide to add a server sync option I'll be able to reuse most of the backend code.
But I'm not sure which IPC mechanism to use for communication between Go and Flutter. Ideally I want something similar to flutter_rust_bridge or what the Wails framework offers: you specify the structs and methods you want to expose to the frontend, run wails generate bindings, and it creates bindings so the JavaScript frontend can call Go methods directly as if they were native JS functions.
Is there anything similar for Go-Flutter, what are the options available beside the localhost http-based ones (REST, WebSocket, gRPC)?
Hi all,
For the longest time I've been doing normal Rust, and I have gone through Jon's latest video on the 1brc challenge and his brrr example.
This was great as a couple aspects “clicked” for me - the process of taking a raw pointer to bytes and converting them to primitive types by from_raw_parts or u64::from_ne_bytes etc.
His example revolves around the need to load data into memory (paged in by the kernel, of course). Hence it's a read operation, and he uses MADV to tell the system as much.

However, I am struggling a wee bit with layout, even though I conceptually understand byte alignment (https://garden.christophertee.dev/blogs/Memory-Alignment-and-Layout/Part-1), when it comes to devising small exercises to demonstrate better understanding.

Let's come up with a trivial example. Here's what I'm proposing, similar to the brrr challenge: file input, read into a memory map using Jon's version (later we can switch to the mmap crate), and allow editing bytes within the map. Assume it's a mass of UTF-8 text with \n as the line-ending terminator. No delimiters, etc.
If you have any further ideas, examples I can work through to get a better grasp - they would be most welcome.
I’ve also come across the heh crate https://crates.io/crates/heh which has an AsyncBuffer https://github.com/ndd7xv/heh/blob/main/src/buffer.rs and I’m visualising something along these lines.
Maybe a crude text editor whose view is just a section (start/end) looking into the map, the same way we use slices. Just an idea…
Thanks!
Hey r/rust! I've been building production Rust systems for several years.
Recent projects:
SwiftDisc/Caelum/Zignal: Discord API libraries (3 languages)
I'm taking on freelance projects for:
High-performance APIs & backends
Real-time WebSocket systems
Discord bots & integrations
Concurrent/parallel systems
GitHub: M1tsumi
DM me if you need help with Rust projects!
I found it interesting that the error causing the outage had already been mitigated in the Rust version of the old proxy. In the Lua version they neglected to do a runtime check when accessing an object, resulting in ‘attempt to index field 'execute' (a nil value)’.
This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.
Hello everyone! Thank you all for the support and suggestions! I didn't expect my initial post to be received so positively.

Since the first post, I've been working non-stop (probably), and today I'm happy to announce the 0.3.0 version.

The aggregate API took most of the time, for real.

An API where you can group items by their keys and calculate aggregated values for each group. Inheriting the "spirit" of this crate, you can aggregate sum and max declaratively too!
To summarize, it's similar to SELECT SUM(salary), MAX(salary) FROM Employee GROUP BY department;.
Example (copied from doc):
```rust
use std::collections::HashMap;
use better_collect::{
    prelude::*,
    aggregate_struct,
    aggregate::{self, AggregateOp, GroupMap},
};

#[derive(Debug, Default, PartialEq)]
struct Stats {
    sum: i32,
    max: i32,
    version: u32,
}

let groups = [(1, 1), (1, 4), (2, 1), (1, 2), (2, 3)]
    .into_iter()
    .better_collect(
        HashMap::new().into_aggregate(aggregate_struct!(Stats {
            sum: aggregate::Sum::new().cloning(),
            max: aggregate::Max::new(),
            ..Default::default()
        })),
    );

let expected_groups = HashMap::from_iter([
    (1, Stats { sum: 7, max: 4, version: 0 }),
    (2, Stats { sum: 4, max: 3, version: 0 }),
]);
assert_eq!(groups, expected_groups);
```

I met quite a lot of design challenges:
- The map values are fixed in place. Because the values already live in the map, the aggregations have to happen in-place and cannot transform their input, unlike collectors, whose outputs can be "rearranged" since they live on the stack. Also, adaptors in (Ref)Collector that keep extra state (such as skip() and take()) may not be possible: removing their "residual" state leaves no choice but to create another map, or to keep a second map to track those states. Both cost an allocation, which I tried my best to avoid. I tried many approaches so that you don't need to transform the map later. Hence the traits, particularly (Ref)AggregateOp, look different.
- There's a naming clash between better_collect::Sum and better_collect::aggregate::Sum. Should I rename the latter to AggregateSum (or something like it), or should this feature be a separate crate?
- For these reasons, the feature is under the unstable flag, and it's an MVP at the moment (it still lacks this and that). I don't want to go fully with it yet; I still need the final design. You can enable this feature and try it out!
I've found a better name for `then`: `combine`. I figured it out while making the aggregate API, and `then` is now renamed to it.

And `copied` and `cloned` are renamed to `copying` and `cloning`, respectively.

And more. You can check it all in the docs!
- Collections now don't implement (Ref)Collector directly, but IntoCollector.
- I found myself importing traits from this crate a lot, so I grouped them into a module you can wildcard-import for easier use.
- I don't export Last or Any because the names are too simple and would easily clash with other names. ConcatStr(ing) is exported since I don't think it can easily clash with anything.
- (Ref)Collector are now dyn-compatible! Even better, you don't need to specify the Output for the trait objects.
- Collector implementations for types in other crates.
- itertools feature: many adaptors in Itertools become methods of (Ref)Collector, and many terminal methods in Itertools become collectors. Not every one of them, though; some are impossible, such as process_results or tree_reduce. I've made a list of all methods in Itertools for future implementation. Comment below with the methods you want the most! (Maybe a poll?)

Like, why do literally all new posts have "0" votes?
I have seen this happen for many months, on all new posts. I never see anything like this in other subs.
The TokioConf 2026 Call for Speakers closes in 3 days!
We need your help to make our first TokioConf great. If you’ve learned something building with Tokio, we’d love to hear it. First-time speakers are welcome. Submit your proposal by Dec 8th.
i am not the author of the blog post, i just think it’s always good news when projects that actually matter start adopting rust, especially for us in the so‑called rust cult.
of course, the usual discussions may or may not pop up again, as they always do.
i have a lot of respect for c developers; most of the critical tools in my own development workflow are written in c, and that’s not going to change anytime soon.
so instead of flaming each other, let’s just focus on writing good software, in whatever language we use.
i really enjoy the rust community, but even more than that i enjoy clippy, and every rust dev probably knows the feeling that the longer you write rust, the more you start to rely on its error messages and suggestions.
Fracture is a proof-of-concept programming language that fundamentally rethinks how we write code. Instead of forcing you into a single syntax and semantics, Fracture lets you choose - or even create - your own. Write Rust-like code, Python-style indentation, or invent something entirely new. The compiler doesn't care. It all compiles to the same native code. (There will likely be a lot of bugs and edge cases that I didn't have a chance to test, but it should hopefully work smoothly for most users).
(Some of you might remember I originally released Fracture as a chaos-testing framework that is a drop-in for Tokio. That library still exists on crates.io, but I am making a pivot to try to make it into something larger.)
Most programming languages lock you into a specific syntax and set of rules. Want optional semicolons? That's a different language. Prefer indentation over braces? Another language. Different error handling semantics? Yet another language.
Fracture breaks this pattern.
At its core, Fracture uses HSIR (High-level Syntax-agnostic Intermediate Representation) - a language-agnostic format that separates what your code does from how it looks. This unlocks two powerful features:
Don't like the default syntax? Change it. Fracture's syntax system is completely modular. You can:
The same program can be written in multiple syntaxes - they all compile to identical code.
Here's where it gets interesting. Glyphs are compiler extensions that add semantic rules and safety checks to your code. Want type checking? Import a glyph. Need borrow checking? There's a glyph for that. Building a domain-specific language? Write a custom glyph.
Glyphs can:
Think of glyphs as "compiler plugins that understand your intent."
```
juice sh std::io

cool main)( +> kind |
    io::println)"Testing custom syntax with stdlib!"(
    bam a % true
    bam b % false
    bam result % a && b
    wow result |
        io::println)"This should not print"(
    <> boom |
        io::println)"Logical operators working!"(
    <>
    bam count % 0
    nice i in 0..5 |
        count % count $ 1
    <>
    io::println)"For loop completed"(
    gimme count
<>
```

```
use shard std::io;

fn main() -> i32 {
    io::println("Testing custom syntax with stdlib!");
    let a = true;
    let b = false;
    let result = a && b;
    if result {
        io::println("This should not print");
    } else {
        io::println("Logical operators working!");
    }
    let count = 0;
    for i in 0..5 {
        count = count + 1;
    }
    io::println("For loop completed");
    return count;
}
```

These compile down to the same thing, showing how wild you can get with this. This isn't just a toy, however: it allows any language's functionality in any syntax you choose. You never have to learn another syntax again just to get a language's benefits.
Glyphs are just as powerful, when you get down to the bare-metal, every language is just a syntax with behaviors. Fracture allows you to choose both the syntax and behaviors. This allows for unprecedented combinations like writing SQL, Python, HTML natively in the same codebase (this isn't currently implemented, but the foundation has allowed this to be possible).
Fracture allows for configurable syntax and configurable semantics, essentially letting anyone replicate any programming language and configure it to their needs just by changing import statements and setting up a configuration file. However, Fracture's power is limited by the number of glyphs implemented and by how optimized its backend is. This is why I am looking for contributors to help, and for feedback on what I should implement next.
```shell
curl -fsSL https://raw.githubusercontent.com/ZA1815/fracture/main/fracture-lang/install.sh | bash
```

I, like many in scientific computing, find myself compelled to migrate my code bases to run on GPUs. Historically I like coding in Rust, so I'm curious: do you all know the best ways to code on GPUs with Rust?
After more or less two months of work, I'm happy to announce hayro-jpeg2000, a Rust crate for decoding JPEG2000 images. JPEG2000 images are pretty rare (from what I gather mostly used for satellite/medical imagery, but they are also common in PDF files, which was my main motivation for working on this crate), so I presume most people won't have a use for that, but in case you do... Well, there exists a crate for it now. :)
This is not the first JPEG2000 decoder crate for Rust. There is jpeg2k, which allows you to either bind to the C library OpenJPEG or to use the openjp2-rs crate, which is OpenJPEG ported to Rust via c2rust. The disadvantage of the latter is that it is still full of unsafe code and also not very portable, and for the former you additionally have to rely on a C library (which doesn't exactly have a good track record in terms of memory safety :p).
I also recently stumbled upon jpeg2000 which seems to have picked up activity recently, but from what I can tell this crate is not actually fully functional yet.
With hayro-jpeg2000, you get a complete from-scratch implementation that only uses unsafe code for SIMD, and if you don't want that, you can disable it and have not a single usage of unsafe anywhere in the dependency tree! The only disadvantage is that there is still a performance and memory-efficiency gap compared to OpenJPEG, but I think there are avenues for closing it in the future.
I hope the crate will be useful to some. :)
Hi guys! Just wanted to share a little project I recently made called "TapeHead". It's a CLI tool that lets you randomly seek, read, and write to a file stream. I made it because I couldn't find a tool that does the same.
You can find it here: https://github.com/emamoah/tapehead
Here's a preview snippet from the readme:
```
File: "test.txt" (67 bytes) [RW]
[pos:0]> seek 10
[pos:10]> read . 5
ello!
[in:5, pos:15]> read 9 5
hello
[in:5, pos:14]> quit
```

There are a couple of features I might consider adding, but I want to keep it very simple with no dependencies, at least for now.
You can try it out and let me know what you think!
Hello, I'm new to Rust and learning it in my private time. I'm writing a small library that internally uses Tokio. What are the best practices in such a situation? While developing the library there is a fixed version of Tokio, but it should match the version of Tokio used by the user of the library. What's the best way of doing that? Or is this approach itself a mistake, and I should separate the library's Tokio environment from the user's? If so, what is the recommended way to still let the user configure the library's Tokio environment? Config objects, maybe?
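Not an authoritative answer, but the common convention for libraries is a permissive caret requirement rather than a pin, so Cargo can unify the library's tokio with whatever compatible version the application chose. Something like:

```toml
# Library's Cargo.toml (sketch): "1" means ">=1.0, <2.0", so any
# semver-compatible tokio the application picks resolves to one copy.
# Enable only the features the library itself needs.
[dependencies]
tokio = { version = "1", features = ["rt", "sync"] }
```

The runtime itself is usually left to the user: the library exposes async fns and never spawns its own runtime internally, so it runs on whatever runtime configuration the application sets up.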
Simple question. Why is something like

```rust
impl<T> ForeignTrait for T where T: OwnedPrivateTrait { /* ... */ }
```

not valid?
I do understand why the orphan rule exists, but in such a case T is implicitly owned, since the trait it is bounded on is itself owned. So it shouldn't be too hard for the compiler to see that this blanket implementation doesn't conflict with the orphan rule.

Am I missing something?
`NonNull<T>` is like `*mut T`, but in combination with `Option` (`Option<NonNull<T>>`) it forces you to check for null when accepting raw pointers through FFI in Rust. Moreover, _I think_ it allows the compiler to apply certain optimizations.

The thing is that we also need the `*const T` equivalent, as most C APIs I am working with through FFI have either a `char *` or a `const char *`. So even though I can implement the FFI bridge with `Option<NonNull<std::ffi::c_char>>`, what about the `const char *`?
FYI, this crate is NOT mine. It is still in development and not yet released on crates.io.
With these two rust-nightly features, we can do that:
```rust
#![feature(generic_const_exprs)]
#![feature(specialization)]

struct Fibo<const I: usize>;
struct If<const B: bool>;

trait True {}
impl True for If<true> {}

trait FiboIntrinsic {
    const VAL: usize;
}

impl<const I: usize> FiboIntrinsic for Fibo<I>
where
    If<{ I > 1 }>: True,
    Fibo<{ I - 1 }>: FiboIntrinsic,
    Fibo<{ I - 2 }>: FiboIntrinsic,
{
    default const VAL: usize =
        <Fibo<{ I - 1 }> as FiboIntrinsic>::VAL + <Fibo<{ I - 2 }> as FiboIntrinsic>::VAL;
}

impl FiboIntrinsic for Fibo<0> {
    const VAL: usize = 0;
}

impl FiboIntrinsic for Fibo<1> {
    const VAL: usize = 1;
}

const K: usize = <Fibo<22> as FiboIntrinsic>::VAL;
```

It works at compile time, but it seems to have much worse performance than the `const fn` version, which means it takes a lot of time to compile when you increase the number of iterations. Can someone tell me the root cause?
So, I have a struct,
```rust
pub struct Viewer {
    pub parsed: markdown::mdast::Node,
}
```
And a method
```rust
pub fn new() -> Self {
    Self {
        // ... other arguments omitted
        parsed: markdown::to_ast("# text", ...)?,
    }
}
```
How exactly should I handle calling Viewer::new() if I want it to return Self (Viewer) instead of a Result<...>? It currently has an incorrect return type, since the to_ast() function might return an error. How do I make it so that any internal errors are dropped (like ?, but without returning any Result<...>)?