Hacker News

Texas Instruments has launched the TI-84 Evo, a redesigned graphing calculator featuring an icon-based home screen, simplified keypad, smarter menus, and a contextual help status bar. Priced at $160, it comes in seven color options with 50% more graphing area and a USB-C charging port. The biggest technical change is a shift from three decades of Z80/eZ80 architecture to an ARM Cortex CPU at 156MHz, more than three times the previous 48MHz, with the OS apparently rewritten natively for ARM rather than running an eZ80 emulator. The device supports Python programming, includes a four-year online calculator subscription, and offers 3.5MB of user-accessible memory. It is exam-approved for the SAT, PSAT, AP, and ACT, and marketed as a distraction-free classroom tool. However, those same exams are already transitioning to computerized formats embedding Desmos, raising questions about long-term demand for dedicated handheld calculators.

Comments: Commenters are most intrigued by the long-overdue move from Z80/eZ80 to ARM Cortex, calling it a major engineering effort after 30+ years, though 3.5MB user memory and 156MHz clock draw comparisons to ESP32 microcontrollers. The $160 price is widely criticized—users note that $12–$20 Casio scientific calculators cover all high school math needs, and a budget laptop costs similarly. The absence of a CAS despite TI's own Nspire CAS existing for 15 years is a recurring complaint. Many are skeptical of "exam approved" marketing given SAT, PSAT, AP, and ACT apps already embed Desmos. Nostalgia is strong, with users sharing stories of programming TI-83/84s with Drug Wars-style BASIC games. Alternatives raised include NumWorks, SwissMicros RPN clones, HP calculators, and emulators like Graph89. Python support draws both enthusiasm and exam-integrity concerns. Battery life with rechargeable versus AAA is flagged as a real tradeoff. TI is broadly characterized as exploiting an education near-monopoly—reinforced by teacher familiarity and TI-specific curricula—while underdelivering on hardware for the price.

The 1932 Psycho-phone—a timer-triggered phonograph playing overnight affirmations—launched fascination with effortless sleep learning, though early studies were discredited by 1954 for failing to verify true unconsciousness. Modern research revived the field via "targeted memory reactivation": scent or sound cues during verified sleep improve next-day recall. A 2014 study found that smokers exposed overnight to a blended odor of cigarettes and rotting fish reduced their consumption by 30%, more than smokers exposed to the same smell while awake. Lucid dreamer studies go further: labs across four countries held real-time dream conversations, delivering math problems and receiving eye-movement responses verified by brain-wave monitoring. Karen Konkoly's study found participants solved 42% of dream-presented puzzles versus 17% of others, with the highest solve rates in ordinary dreams. Sleeping minds may handle 3D and associative thinking more freely than waking ones. However, memory reactivation can disrupt sleep architecture, undermining natural memory consolidation. Researchers caution against "colonizing" sleep with waking goals, arguing dreams serve their own poorly understood purposes.

Comments: Users validate the research through anecdotes: a programmer discovered a shell-injection vulnerability exactly as dreamed, a mathematician solved two weeks of combinatorics problems by sleeping on each overnight, and a musician reported reinforcing guitar technique through dream practice. The incubation effect—receiving insight hours after stepping away—is widely recognized, with some experiencing it twice weekly. Skeptics question whether lucid dreamers form a valid experimental population and suggest a cleaner control would simply test whether prompted dreamers recall delivered information upon waking. Several users cite historical precedent including Edison's hypnagogic problem-solving and the term "hypnopedia." A neurotech founder notes that memory reactivation methods exist that avoid sleep disruption. Humor threads through the discussion—users joke about the brain billing overnight hours and the absurdity of optimizing sleep for productivity. Some users report dreaming in foreign languages after extended immersion, treating it as informal evidence for sleep-based language consolidation.

Eka, a Cambridge startup by MIT professor Pulkit Agrawal and ex-Google DeepMind researcher Tuomas Haarnoja, built a robot arm that handles light bulbs, loose keys, and chicken nuggets with striking adaptability. Unlike vision-language-action (VLA) models trained on human demonstrations, Eka trains robots entirely in simulation via reinforcement learning, letting them invent solutions—similar to AlphaZero. Key innovations are custom touch-sensor grippers and a vision-force-action model incorporating physics principles like mass and inertia, targeting the sim-to-real gap that sank OpenAI's Dactyl. Dactyl solved a sensor-embedded Rubik's Cube but couldn't generalize or recover from slippage; Eka claims to have solved this, though exact methods are trade secrets. A chicken nugget demo shows the robot improvising toss distance based on conveyor speed, illustrating emergent adaptability. Founders compare themselves to GPT-1: early but with nascent general physical intelligence, targeting superhuman dexterity. The same simulation-scaling approach should, they argue, extend to fine manipulation like smartphone assembly.

Comments: Commenters broadly reject the "ChatGPT moment" framing, noting ChatGPT succeeded as an immediately useful consumer product while industrial robot arms are not. The real benchmark cited is Amazon's bin-picking—selecting arbitrary items from unstructured bins—which has resisted automation for years; until solved at scale, breakthrough claims remain unverified. Robotics pioneer Rodney Brooks, long skeptical of humanoid dexterity hype due to sensor inadequacy, reportedly called Eka the closest he has seen to cracking the problem and is an investor/adviser. Some apply a personal heuristic that WIRED coverage signals technologies that have peaked or will fail, citing Better Place as a cautionary example. Others note the piece overlooks comparable demos like Unitree's Spring Festival Gala performance. A Paul Graham "submarine" essay link implies skepticism about editorial independence from PR. One commenter notes the "ChatGPT moment" framing is itself worrying, given ChatGPT's contested societal impact.

The May 2026 HN "Who is Hiring" thread features postings from over 60 companies spanning AI/ML, robotics, fintech, healthcare, and infrastructure. Remote roles are widely available, many restricted to US candidates; others like MONUMENTAL (Amsterdam) and General Fusion (Richmond, BC) require onsite presence. Compensation ranges from ~$80K for QA roles to $485K at Y Combinator, with equity at most startups. Companies span seed through Series C, including Factory AI ($150M Series C) and Solace Health ($207M total raised, unicorn). Unique roles include Project Debug (automated mosquito rearing achieving 95% female reduction in Fresno) and Coop (AI-powered chicken coops with predator-detection CV). Several companies explicitly expect AI-first engineering with daily Cursor and Claude Code use. Dominant stacks include TypeScript/React, Python, Go, Elixir, Rust, and Ruby/Rails. Verticals covered include energy forecasting, air traffic control, cattle feed yard software, geologic modeling, and veterinary EMR. Most founders post personally and encourage applicants to mention HN.

Comments: Y Combinator is hiring for its own 18-person software team at $185K–$485K with fund carry, building tools that power $6B in investments. Infrastructure roles are prominent: Loophole Labs (Kubernetes pod hibernation in 50ms, Zig-native CRIU), exe.dev (custom VMM and global load balancer), and Railway (home-rolled hypervisors, virtio drivers). Several companies stress bootstrapped profitability: Missive (10M+ emails/day, 100% founder-owned), Amplify Renewables (ex-D.E. Shaw/Optiver), and Redbook Software (cattle industry, team of 5). Healthcare is well represented: BillionToOne (prenatal/oncology diagnostics), Pathos AI (200+ petabytes of oncology data), and Instinct Science (veterinary EMR). Niche-domain postings include PlantingSpace (applied category theory, Bayesian modeling), Stellar Science (space situational awareness, no CRUD), and Air Space Intelligence (sequencing 40,000 flights/day). Loophole Labs (Go/Zig/eBPF) and yeet (Rust/BPF runtime) represent deep systems engineering roles. Multiple founders confirm prior successful hires from HN threads and prioritize applicants who mention the source.

Josh Comeau has launched an "Open House" preview for his new course, Whimsical Animations, temporarily making select lessons public so prospective students can evaluate the content before enrolling. The course targets beyond-the-basics animations and interactions using modern CSS, JavaScript, SVG, and Canvas. Comeau notes that most lessons are part of a larger linear arc, so he deliberately chose lessons that provide standalone value rather than just demonstrating his teaching format. The full course homepage at whimsy.joshwcomeau.com contains more details than the preview page itself.

Comments: Commenters are broadly enthusiastic, with several drawing on firsthand experience with Comeau's prior courses. Users describe his courses as expensive but far superior in quality to typical online offerings, with one crediting his CSS course with a significant skill jump that predated the AI era. The animation performance lesson included in the preview is specifically called out as highly educational. His React course also receives a strong, direct recommendation. Overall, commenters portray Comeau as a knowledgeable, approachable instructor who teaches material other courses often overlook.

whohas is a single-file Perl command-line tool by Philipp L. Wesche that queries package availability and version numbers across 16 Linux and BSD distributions simultaneously, including Arch, Debian, Fedora, Gentoo, Ubuntu, FreeBSD, MacPorts, and Cygwin. It outputs results in fixed-width columns covering package name, version, date, repository, and a URL for further details, making it pipe-friendly with grep and cut for filtering by distro or package name. Its primary audience is package maintainers seeking to learn from ebuilds and PKGBUILDs in other distributions, though regular users can use it to find which distros carry specific software. Version tracking across Debian release branches (stable, testing, unstable, oldstable) is explicitly supported. The tool has hardcoded repository domains and its last release was May 19, 2015, making it effectively abandoned, though it remains open source and forkable. Related services mentioned include Repology for cross-distro version tracking, pkgs.org for web-based package searches, and Debian's namecheck utility.

Comments: Users quickly note the tool is a single Perl file with hardcoded repository URLs and no release in over a decade, flagging it as abandoned but forkable. Modern tools have largely displaced its use case: Nix users point to nix-locate for binary-path-based lookups, while Repology and pkgs.org serve the web-search angle. One developer has built a similar cross-package-manager search tool modeled after uv's install UX but notes a "search" command is still missing. Others argue the whole space is ripe for an AI agent to aggregate all distro package metadata into a local SQLite database with incremental updates, a web UI, and a CLI. A Linux GUI task manager developer notes a structural gap: unlike Windows executables, Linux binaries carry no organizational ownership metadata, and they've begun bundling a SQLite lookup script to surface upstream and funding information per process. Users note Homebrew for Linux and Flatpak have reduced distro-specific packaging concerns for many, and that whohas would pair well with distrobox or Bedrock Linux for cross-distro workflows. The consensus is that the core idea still earns its keep, even if the implementation is stale.

lib0xc is a Microsoft-released, MIT-licensed C library codifying safer systems programming patterns into documented, tested APIs using C11 with GNU extensions, supporting both clang and gcc. Rather than reinventing C's type system, it targets high-value incremental improvements: cursor.h offers allocation-free in-memory I/O streams; context.h provides bounds-verified tagged pointer passing that traps on type-size mismatches; int.h traps signed-to-unsigned overflows at runtime instead of silently truncating; and io.h eliminates PRIu32/PRIu64 format macros. The library uses C preprocessor macros and compile-time size verification to enforce safety without dynamic allocation. Systems programming utilities include structured logging, unit testing, hash tables, buffer objects, a unified Mach-O/ELF linker set, and POSIX error utilities. APIs mirror standard library naming for drop-in familiarity and embrace clang's -fbounds-safety extensions while remaining source-compatible with existing C code. Build support covers macOS and Linux on arm64 and x86_64, with porting requiring only platform-specific hooks for panic, memory, logging, and buffer types.
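
The trapping-conversion idea behind int.h can be sketched in a few lines. This is an illustration of the pattern only, not lib0xc's actual API, and Python stands in for C here; the hypothetical checked_u32 helper is made up for the example, while the ctypes line shows the silent-wraparound behavior that plain C conversion exhibits:

```python
import ctypes

def checked_u32(value: int) -> int:
    """Convert a signed value to uint32, raising instead of silently
    truncating; this is the pattern, not lib0xc's real interface."""
    if not 0 <= value <= 0xFFFFFFFF:
        raise OverflowError(f"{value} does not fit in uint32")
    return value

# Plain C-style conversion silently wraps around:
print(ctypes.c_uint32(-1).value)   # 4294967295, the bug class being targeted

# The checked version traps at the call site instead:
try:
    checked_u32(-1)
except OverflowError as exc:
    print("trapped:", exc)
```

The same shape applies in C: a small inline wrapper validates the range and calls a panic hook on failure, so the unsafe cast never happens implicitly.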

Comments: The library's author explains that lib0xc formalizes safer C patterns previously passed down as industry oral tradition into real APIs with documentation and testing, with Microsoft holding copyright and permitting MIT release. Questions arise about whether it sees internal production use at Microsoft or is a side project, and whether MSVC can compile the required GNU extensions. Some argue that C, C++, and POSIX standards should themselves add safer APIs and deprecate unsafe ones via edition-style migrations, noting this approach has precedent and that engineering concerns aren't disqualifying blockers. Commenters express interest in incremental adoption into existing codebases but find the README unclear on whether that workflow is well-supported. Others observe that spatial memory safety could be largely solved in both C and C++ by mandating safer standard interfaces. Mild naming confusion arises from similarity to libxc, an unrelated exchange-correlation library. General reception is positive, with particular interest from developers on systems and embedded C codebases, and clang's -fbounds-safety cited as an especially valuable feature to explore.

A person's account was breached at a European merchant, exposing the masked card number (BIN + last 4 digits) and full expiration date — data PCI DSS explicitly permits displaying. Using the structured PAN format (ISO/IEC 7812, Luhn check digit), attackers narrowed the unknown digits, then exploited the bank's descriptive error codes — distinguishing invalid card number, expired date, or wrong CVV — to brute-force the CVV at ~6 requests/second via rotating proxies. After probing validity through 3D Secure merchants, they switched to a merchant that does not enforce 3D Secure to drain the card, funneling funds to an e-wallet for cash withdrawal. The victim recovered via chargeback, but the e-commerce site refused to treat it as a vulnerability, citing PCI DSS compliance. Payment insiders were unsurprised, noting some merchants process transactions without even requiring an expiration date. Physical receipts, which display the same partial card data under PCI DSS, create an identical offline attack vector.
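
The narrowing step relies on nothing more exotic than the public Luhn check digit. A minimal sketch of the arithmetic (not the attack tooling): with a 6-digit BIN and the last 4 digits exposed, six digits of a 16-digit PAN remain unknown, and the Luhn constraint eliminates nine in ten of the 10^6 possibilities. The BIN and last-4 values below are made up purely for illustration:

```python
from itertools import product

def luhn_ok(digits):
    """Standard Luhn mod-10 check over a full card number,
    most significant digit first."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                       # double every second digit from the right
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

# Hypothetical masked 16-digit PAN: 6-digit BIN + 6 hidden digits + last 4.
bin6, last4 = [4, 0, 0, 0, 1, 2], [9, 8, 7, 6]

n_candidates = sum(1 for m in product(range(10), repeat=6)
                   if luhn_ok(bin6 + list(m) + last4))
print(n_candidates)   # 100000: the check digit removes 9 in 10 of the 10**6 middles
```

Because each digit's Luhn contribution is a bijection mod 10, exactly one value of any single digit satisfies the check for each setting of the others, which is why the candidate space shrinks by a factor of ten regardless of which BIN and last-4 are exposed.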

Comments: Commenters surface several compounding issues. Digital wallets persist card credentials even after cancellation — one user found 99 active wallets on a canceled card requiring a manual phone call to purge, with fraud resuming on the replacement card until all wallets were cleared. The structural split between Authorization and Settlement is flagged as a root cause: settlement is trust-based with no cryptographic verification, letting merchants bill accounts with minimal authentication. Revolut Japan confirmed the attack is not theoretical when entropy-based brute-forcing hit cards sharing the same IIN and expiration month. The absence of mandatory 3D Secure in the US is criticized as a global systemic gap, since issuers must allow non-3DS requests or block US commerce. Payment processors do penalize merchants for card enumeration, though enforcement remains reactive. Solutions discussed include virtual single-use cards, dynamic CVV2 (as Amex implements), and replacing static card credentials with cryptographic payment schemes — with commenters noting the model is largely unchanged since the 1970s.

A Falcon 9 upper stage (designated 2025-10D) from the January 15, 2025 dual-lander mission is predicted to impact the Moon on August 5, according to astronomer Bill Gray of Project Pluto, whose orbital tracking software identified the collision course. The rocket body has been looping in a highly elliptical orbit ranging from 220,000 km at perigee to 510,000 km at apogee — an orbit that intersects the Moon's path — since the launch that sent Firefly Aerospace's Blue Ghost (which landed successfully) and ispace's Hakuto-R (which failed) to the lunar surface. The upper stage was too high for US military tracking but visible to amateur astronomers and asteroid surveys. Impact speed is calculated at 2.43 km/s (~8,700 km/h), equivalent to roughly Mach 7, a comparison rendered meaningless by the Moon's lack of atmosphere. Gray notes the event poses no danger to humans or nearby probes and will likely not be visible, but emphasizes it underscores carelessness in disposing of leftover space hardware, adding that as lunar human presence grows, such uncontrolled trajectories will warrant greater concern.
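
The quoted figures are internally consistent, as a quick unit check shows; the 343 m/s sea-level speed of sound is the usual reference value and is an assumption the article does not state:

```python
v_kms = 2.43                       # impact speed from the article, km/s
v_kmh = v_kms * 3600               # km/s to km/h
mach = v_kms * 1000 / 343          # sea-level speed of sound, ~343 m/s (assumed)

print(round(v_kmh))                # 8748, matching the quoted ~8,700 km/h
print(round(mach, 1))              # 7.1, matching "roughly Mach 7"
```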

Comments: Users debate the use of "Mach 7" as a descriptor, correctly noting speed-of-sound units are meaningless in a vacuum and suggesting alternatives like furlongs per fortnight for comedic effect. Some express curiosity about whether the impact will cause the Moon to "ring like a bell" seismically, referencing past Apollo-era observations of this phenomenon. One user points to the surprisingly long Wikipedia list of artificial objects already on the Moon, noting non-spacecraft items aren't comprehensively catalogued. Others suggest the uncontrolled impact should instead be made intentional and controlled. A comment draws an ironic comparison noting Blue Origin reached Mars before SpaceX despite SpaceX's long-stated "multiplanetary" mission, while referencing a NASA Blue Origin announcement — though this conflates lunar and Mars missions. Enthusiasm for Starship is tempered by acknowledgment it remains significantly behind schedule, even as its engines are praised. The overall thread mixes technical critique of the article's framing, genuine scientific curiosity, and broader skepticism about SpaceX's mission priorities.

A Dunwoody, Georgia resident discovered through public records requests that Flock Safety employees accessed city surveillance cameras — including ones at a children's gymnastics room and the Marcus Jewish Community Center pool — to demonstrate the product to other police departments. Flock confirmed the access occurred under a "demo partner program" authorized by the city, and denied wrongdoing, saying employees accessed cameras with explicit city permission as part of their jobs. However, Flock's own FAQ states "nobody from Flock Safety is accessing or monitoring your footage," directly contradicting the admitted practice. The access logs also revealed Flock's system encompasses not just city-owned cameras but also private business cameras at fitness studios and community centers. After the public records exposure, Flock agreed to stop using Dunwoody's cameras for demos and pledged to train employees to conduct demos only in public locations like retail parking lots. Flock defended its practices by claiming it is more transparent than other surveillance companies precisely because it creates access logs obtainable via public records requests.

Comments: Commenters pointedly ask why Flock lacks a dedicated demo environment with synthetic data, a standard software-industry practice whose absence makes the use of live cameras in sensitive locations impossible to justify as technical necessity. Questions also focus on whether community centers like the MJCCA are city-owned, whether the city had the legal authority to grant Flock access to private cameras, and whether parents were ever notified. A key concern is that Flock's business model is explicitly designed to eliminate barriers to real-time surveillance access — a qualitatively different threat than systems requiring device-specific police requests. Some observe that Flock's "radical transparency" framing, praising itself for creating access logs, is a deflection from normalizing mass surveillance infrastructure. The YC President's continued support for Flock draws criticism, with users questioning his defense of the practice. Others note the demo environment is functionally production when live data is used — the label is irrelevant. Several users call for all Flock footage to be subject to FOIA requests, and note this story duplicates a prior well-attended HN thread.

WhatCable is a free macOS menu bar app by Darryl Morley that reads USB-C cable capabilities from four IOKit service families without private APIs or entitlements. Per port, it shows connection type, charging diagnostics identifying the bottleneck (cable, charger, or Mac), e-marker info (USB 2.0 to 80 Gbps, 3A/5A ratings, vendor chip), charger PDO voltage profiles with live negotiated selection, connected device identity, and active transports. It requires macOS 14 and Apple Silicon; Intel Macs are unsupported because Intel Thunderbolt 3 controllers don't expose USB-PD state via IOKit. A bundled whatcable CLI supports human-readable, JSON, watch, and raw output modes, installable via Homebrew, which auto-symlinks the CLI. The app is notarized and Developer ID-signed with no Gatekeeper warnings. It trusts e-marker data as advertised—counterfeit cables misreporting capabilities cannot be caught by software. App Sandbox restrictions prevent App Store distribution, and PD 3.2 EPR variants may need future decoder updates.

Comments: Users are enthusiastic, with a blind user noting it replaces a $16 hardware cable tester. The developer shipped 16 releases in 7 hours incorporating live HN feedback, adding CLI and Dock-mode options. One commenter ported a similar UI to KDE Plasma 6 using GPT-5.5 in 10 minutes for $2. Several users report "No USB-C Ports Detected" on M1 MacBook Pros running Sonoma, and one sees connected devices duplicated across both ports. ChromeOS is cited as having comparable e-marker reading via Discover Identity messages; Linux alternatives like lsucpd are noted. Concern was raised about counterfeit cables misreporting e-marker data—the app confirms it cannot detect this, as it trusts the chip. Curiosity arose about how the undocumented IOKit SPI (e.g., IOPortTransportComponentCCUSBPDSOP) was discovered, with some wary of AI-generated code quality. A "plugged upside down" warning confused users since USB-C is reversible by design. Requests for Homebrew support, Linux/Windows ports, and a community cable-quality leaderboard were common.

AdamFusion is an AI agent add-in for Autodesk Fusion 360 that drives CAD operations natively. Installation requires dropping the bundle into Fusion's AddIns folder — on Mac under ~/Library/Application Support/Autodesk/Autodesk Fusion 360/API/AddIns/, on Windows under %APPDATA%\Autodesk\Autodesk Fusion 360\API\AddIns\ — then enabling it via Shift+S → Add-Ins → Run, with Run on Startup to auto-load thereafter. The tool also supports OnShape, and its pricing is framed in terms of "creative generations."

Comments: Mechanical engineers broadly push back on text-to-CAD, arguing that crafting accurate dimensional prompts takes longer than direct manipulation and that modeling is actually enjoyable — preferring AI for design intent and requirements instead. A competing developer behind GrandpaCAD notes their product targets beginners while AdamFusion targets professionals, sharing benchmarks showing Opus 4.7 and GPT 5.5 are comparable in generation quality but GPT 5.5 is slower and far more expensive, with Gemini 3.1 cited as the original breakthrough model. Technical skeptics question whether Fusion's internal data model is structured enough for LLMs, and suggest CAD-as-code tools like OpenSCAD are inherently better suited. OnShape-specific concerns include unconstrained sketches lacking parametric relationships and risk of exhausting annual API quotas. Others criticize the focus on closed commercial platforms, request FreeCAD support, and flag confusion over pricing. One commenter notes AI CAD tooling is already arriving faster than engineers expected.

A guide describes a jailbreak technique targeting LLMs including GPT-4o, o3, Claude, and Gemini that exploits safety guardrails by framing harmful requests—drug synthesis, ransomware, keyloggers, and carfentanyl synthesis—within LGBT roleplay contexts. The technique, at version 1.5, works by asking how a gay or lesbian person would describe illegal processes rather than directly requesting them. The author theorizes that models are trained to be accommodating toward LGBT contexts to avoid appearing discriminatory, and this compliance overrides safety refusals. Example prompts combine roleplay framing, character obfuscation (splitting keywords with symbols), indirect phrasing, and reverse psychology framing such as asking what to avoid to prevent harm. The guide claims success against GPT-4o, o3, Claude 4 Sonnet, Claude 4 Opus, and Gemini 2.5 Pro, and suggests combining the technique with additional obfuscation for stronger results.

Comments: Commenters are largely skeptical of the author's explanation, with researchers pointing out the technique is simply another form of roleplay jailbreaking and obfuscation rather than anything specific to LGBT framing, citing academic work showing language choice and roleplay are the real drivers. Several note that similar approaches—asking models to emulate a Linux terminal, roleplay as a grandma, or adopt an uncensored persona—have existed for years. Many find the author's "political overcorrectness" theory baseless conjecture rather than evidence-based analysis, and note the actual outputs shown lack meaningful technical depth. Testers report the technique no longer works reliably on current models, with GPT flagging ransomware prompts as cybersecurity risks. The discussion ranges from humorous observations to serious critiques about AI safety, with some arguing that securing LLMs against social-engineering attacks may be fundamentally impossible without withholding training data on sensitive topics, and that the field lacks rigor in addressing these vulnerabilities systematically.

UC Davis civil engineering professor Jay Lund estimates California AI data center water use at 32,000–290,000 acre-feet per year—roughly 0.08%–0.7% of the state's 40 million acre-feet annual total—using physics-based heat dissipation formulas verified against four AI chatbots (ChatGPT, Claude, Gemini, Copilot). A best estimate of ~20,000 acre-feet equals irrigating 10,000–100,000 of California's 7 million agricultural acres. Lund argues media and advocates exploit industry opacity to substitute speculation for honest estimation, drawing parallels to historical tech anxieties—vaccines, chlorination, automobiles—where some fears proved illusory and others warranted. Water-scarce regions like the arid West face more acute impacts, while surplus-capacity areas might welcome data center revenue. His key call is for quantitative grounding in public discourse, with AI tools themselves offering a path toward faster, more honest preliminary estimates.
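
The headline percentages follow directly from the quoted figures, as a quick check confirms:

```python
total_af = 40_000_000        # California annual water use, acre-feet (per the article)
low, high = 32_000, 290_000  # Lund's estimated AI data center range, acre-feet/year

print(f"{low / total_af * 100:.2f}%")   # 0.08%
print(f"{high / total_af * 100:.1f}%")  # 0.7%
```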

Comments: Commenters broadly agree public fears are exaggerated but challenge key assumptions: the open-loop vs. closed-loop cooling distinction matters, as cheaper open-loop systems consume far more water. A Google data center using 2–8 million gallons of drinking water daily—straining local infrastructure decades ahead of schedule—is cited as a counterexample. Critics call comparisons to agriculture and cities misleading since those are essential uses, and fault the analysis for omitting water embedded in electricity generation. Chemical runoff from cooling systems (biocides, heavy metals) is flagged as an overlooked concern. Commenters note aggregate figures obscure local bottlenecks and growth projections matter more than current baselines. Using AI chatbots as the primary citation source for estimating AI water use is widely seen as undermining the analysis. A few note data center electricity demands and generator pollution pose larger risks than water use.

HN's monthly "Who wants to be hired" thread for May 2026 invites job seekers—not recruiters or agencies—to post structured profiles with location, remote preference, relocation willingness, tech stack, resume link, and email. Readers are instructed to contact posters only for work opportunities. External aggregator tools at nthesis.ai and wantstobehired.com are linked for searchers browsing candidates.

Comments: Dozens of engineers posted profiles spanning DevOps/SRE (AWS, GCP, Kubernetes, Terraform), backend (Rust, Go, Python, Elixir), AI/ML (LLM tooling, PyTorch, RAG, interpretability), mobile (iOS/Swift, Android/Kotlin), full-stack web, and niche areas including zero-knowledge proofs, robotics, embedded systems, mathematical finance, and operations research. Experience ranges from new graduates to 30-year veterans. Locations span the US, Canada, Europe, South America, India, and beyond, with most preferring remote work. Notable profiles include a former Uber Cadence developer advocate who onboarded 50+ teams on billion-daily-op workflows, a systems engineer behind a 3.3B notifications/day engine at 99.9999% reliability, a PhD mathematical finance specialist with 20+ years in capital markets, a robotics engineer with autonomous vehicle and industrial harvester deployments, and a data scientist with 20+ years in logistics, oil and gas, and criminology applications.

Understand-Anything is a Claude Code plugin using a multi-agent pipeline to build interactive knowledge graphs from codebases, saving results to .understand-anything/knowledge-graph.json. Seven specialized agents handle file discovery, function/class/import extraction, architectural layer identification, guided tour generation, graph validation, business domain extraction, and wiki entity analysis. File analyzers run in parallel batches of 20-30 files with incremental updates on subsequent runs. The dashboard visualizes the codebase color-coded by layer (API, Service, Data, UI, Utility), supports semantic search, shows uncommitted-change impact analysis, adapts by persona (junior dev, PM, power user), and explains 12 programming patterns in context. A domain view maps code to business processes; a knowledge mode processes Karpathy-pattern wikis via wikilink parsing plus LLM-discovered implicit relationships. The JSON graph can be committed for team sharing, supports git-lfs for large graphs, and auto-updates via post-commit hook. It works across Claude Code, Codex, Cursor, VS Code/Copilot, Gemini CLI, and other AI coding platforms.

Comments: Commenters broadly question whether knowledge graph visualizations produce genuine code understanding, arguing real intuition forms through hands-on exploration — reading code directly — not polished AI summaries. Many compare the output to Obsidian's graph view: visually impressive but practically cumbersome and low-value, with large spaghetti-node graphs seen as actively unhelpful. A philosophical thread runs through comments: outsourcing thinking to AI may impede understanding, and educators reportedly see students defaulting to AI summaries instead of wrestling with material themselves. At least one commenter directly alleges fake GitHub stars, questioning the project's 9.7k count. Some find the tool over-engineered compared to simply asking an LLM "where do I start?", and the repo's many dot-folders prompted one reader to mistake it for satire. One commenter shared a competing tool using interactive mermaid-diagram walkthroughs. The broader frustration is fatigue with vibe-coded AI tools that overpromise while delivering visualizations unlikely to match any individual developer's actual mental model.

Orion's flight control architecture uses four Flight Control Modules, each a self-checking processor pair, giving eight CPUs running identical software in parallel. The fail-silent design silences any CPU producing erroneous output from a radiation event, then resets and resynchronizes it mid-flight automatically. All FCMs receive identical inputs and code, with clocks recalibrated every second to network consensus time; deadline-missing modules are silenced and rejoined. Triple-modular-redundant memory self-corrects single-bit errors on every read, dual-lane NICs catch bit flips before they corrupt commands, and the network itself has three independent planes with self-checking switches. A separate Backup Flight Software system on different hardware, a different OS, and independently developed simplified code guards against common-mode failures like software bugs affecting all primary channels simultaneously. Even total power loss is survivable: the craft stabilizes, points solar arrays at the Sun, achieves thermal stability, then re-establishes communications, with crew able to manually configure life support or don spacesuits throughout recovery.
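The triple-modular-redundant memory described above reduces, per bit position, to a majority vote across three replicas: any single-bit upset leaves two replicas in agreement, so the read corrects it transparently. A minimal sketch of the principle (illustrative only, not Orion's flight code):

```python
def tmr_read(a: int, b: int, c: int) -> int:
    """Majority-vote read over three replicas of one memory word.

    For every bit position at least two replicas agree, so a
    single-bit flip in any one replica is outvoted and corrected.
    """
    return (a & b) | (a & c) | (b & c)

# A radiation event flips one bit in the third replica; the
# vote still recovers the original word on read.
word = 0b1011_0010
corrupted = word ^ 0b0100_0000
assert tmr_read(word, word, corrupted) == word
```

The same expression corrects independent single-bit flips in different replicas too, as long as no single bit position is wrong in two replicas at once.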

Comments: Users note that dissimilar redundancy—building across multiple Linux distros and BSDs—surfaces undefined behavior invisible on a single target, mirroring Airbus's use of different CPU families to avoid hardware-level bugs. Some raise the counterpoint that layered redundancy increases system complexity, and cite aircraft accidents where complexity was an indirect cause of failure rather than a safeguard; quantifying prevented accidents against induced risk is noted as genuinely difficult. Others question how engineers determined eight CPUs as the right number and how a self-checking pair resolves disagreement between processors when outputs diverge. Technical readers note that lockstep microcontrollers in automotive safety components operate on the same principle at a smaller scale. Several commenters call for empirical fault data—graphs of in-sync FCM counts over time correlated against predictions—to assess whether the design is appropriately or over-engineered. Astronaut training demands for manually managing failure scenarios are also briefly flagged.

The Peabody Hotel in downtown Memphis, Tennessee has hosted a famous Duck March since 1933, when the hotel's general manager drunkenly released his hunting decoy ducks into the Grand Lobby fountain after a failed hunting trip. The sensation led to five resident North American Mallards being installed permanently, and by 1940, bellman Edward Pembroke — a former circus trainer — began training the birds to perform the now-iconic twice-daily march along a red carpet to a marble fountain. Pembroke served as duckmaster for 50 years, raising the ducks' profile through appearances on The Tonight Show, People magazine, and Sesame Street before his death in 1994. Today, 29-year-old Anthony Petrina, a University of Memphis hotel management graduate, serves as the fifth duckmaster. His day begins at 8:30 a.m. with cleaning, feeding, and bathing the ducks, with the public march at 11 a.m. and retrieval at 5 p.m. The ducks stay for only 90 days to prevent over-domestication, then are released to a farm pond to migrate freely. They are housed in a $200,000 rooftop "Royal Duck Palace" during their stay, but Petrina deliberately avoids hand-feeding or petting them to preserve their wild instincts.

Comments: Commenters find the Duck March a charming example of meaningful human-animal interaction in an urban setting where such contact is rare. One commenter suggests the experience would be even more whimsical if a herding dog were involved, linking to a video of a border collie herding ducks — also in Tennessee — and noting the geographic coincidence with some amusement.

Spotify is rolling out 'Verified by Spotify' badges to identify human artists via signals like linked social accounts, listener activity, merchandise, or concert dates, with 99%+ of actively searched artists expected to qualify. The system prioritizes "important contributions to music culture and history" over content farms. Critics note the badge only confirms a human is behind the profile, not that the music itself is AI-free. Campaigner Ed Newton-Rex warns it could disadvantage artists lacking traditional markers like touring or merch, and argues Spotify should instead automatically label AI-generated music as some competitors do. Prof. Nick Collins notes AI involvement isn't binary but a spectrum, and such a system may favor established commercial artists over independents. Spotify has faced years of user backlash including a Leipzig developer building his own AI-music blocking tool and community forums demanding clearer labeling. Former CEO Daniel Ek said in 2023 he had no plans to ban AI content. The Velvet Sundown — once at 850,000 monthly listeners — drew controversy when revealed as a synthetic project; since it began disclosing AI involvement, its monthly listener count has dropped to 126,000.

Comments: Users broadly view the badge system as a scammer filter — it verifies the human behind a profile but ignores AI-generated music from verified artists. Many demand filters to exclude AI music from recommendations, with some switching to Qobuz for manual curation. A financial conflict is raised: Tencent Music, a major Spotify investor, profits from AI content, creating incentive to promote it over royalty-paying human artists. The Pixiv precedent is cited: once penalties attach to AI labels, creators stop self-reporting, making voluntary tagging unenforceable. Many prefer the inverse — labeling AI music explicitly rather than verifying humans. AI music quality is questioned; prior tech like synthesizers produced genuinely new sounds, while current AI output imitates lowest-common-denominator pop. A generational divide is anticipated, with younger AI-native creators expected to normalize AI music over time. Virtual performers like Hatsune Miku complicate verification. Users also want the same labeling applied to podcasts, where AI-generated content is increasingly hard to detect.

Professor Sally McKee, a computer scientist and professor with affiliations at Cornell, Princeton, University of Utah, Chalmers, UVa, and the University of Siena, died suddenly in late April 2026. Remembered as brilliant, warm, and generous, she shaped careers and families alike—acting as matchmaker across her social network, with at least two marriages attributed to connections she facilitated. She contributed to distributed and reconfigurable computing, co-authoring multiple papers and naming the ERA (Embedded Reconfigurable Architecture) Project. She taught PhD courses internationally, attended conferences including MEMSYS, and spent time at Bell Labs. Colleagues recall her directness, humor, love of chocolate, pirates, and orange, as well as her instinct to help those in need—human or animal. Most tributes note her death came far too soon, with former classmates placing her in the class of 1981.

Comments: Commenters identify Sally McKee as the author of the landmark CS paper "Hitting the Memory Wall: Implications of the Obvious," which addressed computer memory bottlenecks in processor performance. Her archived personal website offers glimpses of her career as an itinerant CS PhD and professor. The memorial page's "Memory Wall" section—a deliberate play on her famous paper's title—serves as the space for public tributes. Notably, at least one commenter whose dissertation focused on the memory wall had never encountered her work, and another had never heard the term at all, suggesting her foundational contribution may not have received broad attribution despite its influence on the field.

Sarah Murphy uses John Dee's obsidian scrying mirror as a metaphor for LLMs, arguing that how people use AI reveals more about themselves than about any optimal practice. She catalogs wildly different usage patterns: some prompt LLMs with affirmations of worth, others celebrate solo shipping, VCs build complex management frameworks, and aspiring thought leaders run ideas through panels of AI luminaries. None can be proven superior, and most aren't transferable between users. Murphy's own "partner mode" system prompt reflects her personal transition to partnership after long solitude — proof, she admits, only of her own psychology. She extends the mirror metaphor to AI skeptics, who simultaneously dismiss AI as ineffective while fearing it as a capitalist or authoritarian weapon, confirming their priors either way. AI's capacity for endless patient personal attention explains its unprecedented adoption curve, while its drawbacks mirror societal inequality — likely amplifying income gaps rather than closing them. The only reality check Murphy offers: if you ship code faster with LLMs than without, you have a foothold in reality; otherwise, you're playing a solo roleplaying game.

Comments: Commenters reframe the mirror metaphor as opportunity rather than warning, arguing that introspective use of LLMs may be their most underexplored value. Instead of pointing AI outward with directives to build or write, users find greater return in turning it inward — using it to surface ideas lodged below conscious, verbal thought, extracting intuition and experience that would otherwise stay inarticulate. One commenter wrote a book on this approach after investigating why their results differed from peers. The broader point raised is that LLMs have distinct strengths and weaknesses, as do the humans using them, and that deliberate, careful pairing of both can exceed what either achieves alone — making the mirror not a distortion but a genuine amplifier when used with self-awareness.

The HP C2089A PostScript Cartridge Plus (1991) added PostScript Level 2 to LaserJet II/III via Adobe's own reference interpreter (v2010.118) on a 2 MB ROM, running on a Motorola 68000 at 8 MHz with 1 MB RAM. The retro-ps project emulates this hardware — M68K CPU, mainboard soft-traps, engine-done interrupts — to run that interpreter on the command line or client-side in the browser with no server. Unlike the original 300 DPI, fixed paper, and 0.25" margin constraints, retro-ps supports any DPI up to ~1450 on Letter, any paper size, and no margins; --lj3 restores original behavior. The emulator provides 16 MB of RAM (vs. the 68000's address-space limit), enabling high-DPI rendering without modifying Adobe's allocator. Adobe's halftone cell was hand-tuned for 300 DPI, so the emulator injects a DPI-scaled setscreen prolog to prevent chalky grayscale at higher resolutions. A clip limit in Adobe's code caps output at ~16,000 device pixels per axis, setting the practical DPI ceiling. Future targets include the Pacific Page P·E cartridge and the i960-based LaserJet 4M, which baked PostScript Level 2 directly into its formatter ROM.
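The ~1450 DPI ceiling follows directly from the ~16,000-pixel clip limit: on 8.5×11" Letter paper, the 11-inch edge hits the cap first. A quick back-of-the-envelope check, plus the halftone-scaling idea behind the injected setscreen prolog (the 60 lpi base frequency here is an illustrative assumption, not Adobe's actual tuned value):

```python
CLIP_LIMIT_PX = 16_000        # per-axis cap in Adobe's clipping code
LETTER_IN = (8.5, 11.0)       # Letter paper dimensions, inches

# The long edge reaches the pixel cap first, fixing the DPI ceiling.
max_dpi = CLIP_LIMIT_PX / max(LETTER_IN)
print(int(max_dpi))           # prints 1454, matching the stated ~1450

def scaled_frequency(dpi, base_lpi=60, base_dpi=300):
    """Screen frequency that keeps halftone cell geometry constant.

    A cell tuned for 300 DPI keeps the same physical size at higher
    resolutions if frequency scales linearly with DPI; without this,
    the fixed 300 DPI cell produces the "chalky" grayscale noted above.
    """
    return base_lpi * dpi / base_dpi
```

So at 600 DPI the injected prolog would double the screen frequency, preserving the dot pattern Adobe hand-tuned for the original engine.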

Comments: Users flagged UX friction: the browser tab can silently enter a broken state that times out all code submissions, requiring a force-reload since soft-reload is disabled, and the Code textarea doesn't handle Tab key input despite PostScript examples relying heavily on indentation. PostScript learning resources were shared, including Adobe's Blue Book, Red Book (third edition), and Green Book, plus test .ps files that rendered correctly minus color support. A 502 Bad Gateway error was reported, suggesting the demo attracted heavy traffic. One user noted macOS removed built-in PostScript support from Preview.app in recent versions. Others drew comparisons to related work: a WASM-based jbig2 decoder hand-trimmed to 27 KB, and Sun NeWS — a PostScript-based windowing system — as another potential emulation target. Historical trivia surfaced: Type 1 font encryption was broken in 1987 by Harvey Grosser, an ex-IBM System 360 coder in Palo Alto. Some users expressed enthusiasm for PostScript hacking generally, while others posted jokes referencing "PC LOAD LETTER" and Adobe's subscription pricing model.

Ubuntu and Canonical servers have been offline since Thursday morning following a sustained DDoS attack by a pro-Iran group using the Beam stressor service, which markets itself as load testing but functions as a paid takedown platform. The outage prevents normal access to Ubuntu webpages and OS updates, though mirror sites hosted by universities and third parties continue to function. Canonical confirmed the attack on their status page as a "sustained, cross-border attack" while otherwise maintaining radio silence. The timing is notable: the outage follows a botched disclosure of CVE-2026-31431, dubbed CopyFail, a critical vulnerability allowing any unprivileged Linux user to escalate to root. The same group recently claimed DDoS attacks on eBay. The attack is widely believed to be strategically timed to delay patch distribution for CopyFail, compounding the vulnerability's impact across the world's most popular Linux distribution.

Comments: Commenters widely speculate the DDoS timing is deliberate, designed to delay patch delivery for CVE-2026-31431 (CopyFail), a critical privilege escalation flaw allowing any unprivileged user to gain root on Ubuntu systems. Several note that apt mirror infrastructure, distributed across universities and third parties, remains functional and provides a viable workaround for updates. Questions arise about why Ubuntu's cloud-hosted infrastructure lacks adequate DDoS mitigation, with some suggesting an architectural design failure. Canonical's "cross-border" framing drew criticism, with users pointing out that large DDoS attacks typically originate from globally distributed compromised IoT devices, making national-border framing misleading. Multiple posts identify the thread as a duplicate of other active HN discussions on the same topic. One commenter jokes that the attackers' ransom demand was "no more systemd."

An immigration attorney hosts a recurring Hacker News AMA on US immigration law. The session addresses the new $100,000 H-1B fee — its scope, applicability to those already in the US, and likelihood of renewal or legal challenge. PERM labor certification's requirement to advertise positions that aren't genuinely open draws ethical questions from managers who cannot actually hire qualified applicants. Processing delays are widespread across N-400 citizenship, EB-1A, EB-3, I-130 spouse petitions, and EB-5 I-829 RFE cases. F-1 STEM OPT self-employment viability — LLC-to-S-Corp conversion, W-2 requirements, pre-OPT business formation — remains a contested gray area. TN and L-1 conditions, O-1 difficulty for early-stage founders, and family-based green card timelines are also covered. The Trump administration's enforcement climate generates concern about detention of lawful residents, citizenship stripping, and safe international travel. Advance parole re-entry risks and H-1B status preservation during travel are recurring themes. The attorney declines specific legal advice without full case facts but addresses general policy trends.

Comments: Questions cluster around the $100k H-1B fee — applicability to current US residents, economic feasibility, employer repayment clauses, and likelihood of legal challenge. PERM's fake job posting requirement confuses managers who cannot hire qualified applicants anyway, raising ethical concerns about the process and disrespect to genuine candidates. Green card holders ask about advance parole risks, post-approval job changes, and residency requirements. F-1 students probe STEM OPT self-employment legality, solo LLC W-2 structures, and whether pre-OPT business formation counts as unauthorized employment. Visa holders on H-1B, L-1, TN, E-3, and O-1 ask about switching categories and processing time increases. Several commenters express alarm about ICE detention of lawful residents and seek practical guidance. Canadian TN and E-3 interest is elevated as H-1B alternatives given the new fee. Processing delays for N-400, EB-1A, EB-5 I-829 RFEs, and I-130 petitions are noted across multiple independent cases. AI's impact on immigration legal practice is briefly raised, with concern about hallucination risk from legal AI tools.

A letter from Edsger Dijkstra criticizing APL is presented alongside a pointed rebuttal. The author finds the criticism ironic because Ken Iverson designed his notation as a human communication tool first, and Dijkstra encountered "Iverson notation" in August 1963 before any computer implementation existed. The author argues APL suits Dijkstra's own pen-and-paper formal proof style, and that executability is an asset in formal methods courses as it "keeps you honest." Two examples demonstrate APL's power: a derivation of Ackermann's function via induction using Dyalog APL, yielding closed-form expressions for each recursion level, and an efficient index-of algorithm for inverted tables, where column-oriented storage reduces per-element overhead and speeds column access. The inverted-table derivation yields the concise function {(⍉↑⍺⍳¨⍺)⍳(⍉↑⍺⍳¨⍵)}, and the author contends equivalent derivations in other languages would be impractically long. The piece is written in honor of Ken Iverson's 93rd birthday.
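The cited APL one-liner works by mapping each column's values to their first-occurrence indices within the table's own column, then performing a row-wise index-of over those index tuples. A rough Python transliteration of the idea (names are mine; this is a sketch of the algorithm, not Dyalog-exact semantics):

```python
def inverted_index_of(table, queries):
    """Index-of for inverted (column-oriented) tables.

    table, queries: lists of columns. Returns, for each query row, the
    index of the first matching table row, or the row count when absent,
    mirroring APL's dyadic iota convention for "not found".
    """
    def first_occurrence(col, vals):
        # Index of each value's first occurrence in col; len(col) if absent.
        pos = {}
        for i, v in enumerate(col):
            pos.setdefault(v, i)
        return [pos.get(v, len(col)) for v in vals]

    # Key each row by its tuple of per-column first-occurrence indices
    # (the role played by the transposed ⍺⍳¨⍺ and ⍺⍳¨⍵ in the APL).
    table_keys = list(zip(*(first_occurrence(c, c) for c in table)))
    query_keys = list(zip(*(first_occurrence(c, q)
                            for c, q in zip(table, queries))))

    seen = {}
    for i, row in enumerate(table_keys):
        seen.setdefault(row, i)
    return [seen.get(row, len(table_keys)) for row in query_keys]

# Two-column table with rows ("a","x"), ("b","y"), ("a","x");
# query rows ("b","y") and ("a","z").
table = [["a", "b", "a"], ["x", "y", "x"]]
queries = [["b", "a"], ["y", "z"]]
assert inverted_index_of(table, queries) == [1, 3]  # 3 means "not found"
```

The column-oriented pass touches each column once, which is the per-element-overhead win the article attributes to inverted storage.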

Comments: Commenters note Dijkstra was broadly dismissive of most languages—calling FORTRAN "an infantile disorder," COBOL a mind-crippler, and APL "a mistake, carried through to perfection"—preferring Algol 60 and pen-and-paper proofs over REPL-driven development, making him unlikely to warm to LLM code generation either. Some wonder if his objections were partly practical, suggesting modern Unicode fonts rendering APL glyphs natively might have softened his stance. Others compare APL to Perl: both suffer from unconventional syntax creating adoption friction, with the pool of developers willing to learn its alien symbolic notation shrinking at each selection step. One commenter recalls writing APL for a 1978 undergraduate project—finding it powerful but nearly write-only—and never using it professionally again. APL's pioneering role in array-oriented and functional programming is acknowledged, with a Russ Cox advent-of-code APL session cited as a modern example. Several observers argue the article's own examples validate Dijkstra's criticisms rather than refute them, and the opening remarks about gadget-enamored users overlooking poor interfaces are compared pointedly to modern LLM enthusiasm.

NHS England's SDLC-8 policy directed all NHS repositories to hide their source code, reversing commitments in the UK Government Design Principles and NHS Service Standard Principle 12, both of which require publicly funded code to be open. An open letter with 236 signatures calls on NHS England to withdraw SDLC-8 and recommit to open source. The letter argues open source demands higher code quality, proactive vulnerability management, and structured risk processes — hardening security like immune exposure — while closed source substitutes obscurity for depth. The closure was reportedly triggered by AI security scanning concerns linked to an entity called 'Mythos,' but critics argue the code has already been scraped, AI tools analyze binaries and probe live websites equally well, and tens of thousands of NHS pages linking GitHub repos would require costly updates. Determined attackers are unimpeded by source code secrecy.

Comments: Commenters call the NHS closure security theater driven by non-technical managers avoiding blame, noting code already scraped by AI cannot be protected retroactively. The 'Mythos' AI scanning justification is challenged: AI tools probe live sites and analyze binaries equally well, making closure futile. Broader industry context shows Fortune 50 companies pausing open source contributions until AppSec can remediate vulnerabilities within 24 hours, down from 8–10 days. This reflects a structural OSS sustainability crisis where maintainers lack funding and organizations have neglected security for decades. Proposed fixes include open-core models, formal sponsorship, and commercializable licenses to pay contributors. Practically, Cloudflare blocked some users from signing the letter, and NHS bureaucratic timelines could take a decade to reverse course even with leadership will.

Iranian drone strikes roughly two months ago damaged three AWS data centers in the UAE and Bahrain, and an April 30 update confirmed that regions ME-CENTRAL-1 and ME-SOUTH-1 remain unable to support customer applications—pushing total downtime toward nearly half a year. AWS has suspended billing in those regions while repairs continue, having already waived all March 2026 charges at an estimated cost of $150 million. The company strongly recommends customers migrate workloads to unaffected regions and use remote backups to recover inaccessible resources. Some customers, such as Dubai-based super app Careem—offering ride-hailing, food delivery, and household services—completed overnight migrations and quickly resumed operations on other servers.

Comments: Commenters note that data centers have become attractive military targets, with cheap drones capable of inflicting billions in damage at low human cost—and that major governments almost certainly know the locations of facilities AWS keeps publicly secret. The billing suspension is widely characterized as the bare minimum rather than generosity, since customers cannot use infrastructure that is down. Some express surprise that reportedly only 19 server racks were damaged given the destructive potential of large long-range drones like the Shahed, which have been seen collapsing entire buildings in other contexts. A former AWS employee notes that early disaster planning centered on natural disasters and never seriously contemplated wartime targeting. Other comments touch on the broader geopolitical conflict, reference the war's reported end, and one wryly notes the appeal of AWS's normally expensive egress charges being waived as an unintended side benefit.

Business websites are functional tools meant to serve visitors, not reflections of a founder's or decision-maker's personal taste — that's the central argument. Decision-makers frequently override designers' research-backed recommendations because they feel emotionally invested in the brand, treating the site like personal art rather than a conversion instrument. The author draws a medical analogy: just as patients defer to surgeons' expertise, business leaders should defer to designers' user research. In practice, designers often concede to client preferences to preserve the relationship, resulting in sites optimized for internal approval rather than user goals — effectively mood boards for leadership. The prescribed fix is simple: in design reviews, ask whether a proposed change helps the user or helps the decision-maker, and let data-backed designer answers take precedence over personal taste. The piece is written from the perspective of a developer who builds what designers hand off, giving it a front-row seat to the boardroom dynamic it describes.

Comments: Commenters challenge the premise on several fronts. Personal website owners note their sites are explicitly for themselves, and many reject the "website isn't art" framing as myopic, arguing brand identity and aesthetic choices genuinely shape user trust. Several push back on the designer-as-expert framing, contending designers are often portfolio-driven and may understand the customer less than founders with years in the market — making founder override sometimes correct. The HIPPO (Highest Paid Person's Opinion) dynamic is widely acknowledged. One ironic observation: the author's own site displays a vanity clock showing their local time. Others extend the critique to endemic industry-wide UX failures — autoplay video, hijacked back buttons, modal overload — suggesting poor decisions pervade regardless of who holds design authority. Many note the title misled them into thinking the post addressed personal sites; once they understood it targets commercial sites, they largely agreed. One commenter found that cutting 60% of landing page copy — eliminating self-reassurance in favor of user confrontation — validated the article's core point from personal experience.