Hacker News

An artist spent years creating 1-bit pixel art recreations of Hokusai's 36 Views of Mount Fuji, working on actual vintage Macintosh hardware — specifically a Quadra 700 or PowerBook 100 running System 7 — at the original Mac screen resolution of 512×342 pixels using Aldus SuperPaint 3.0. The project, now stalled five years in, was motivated by nostalgia and a desire to capture the flow state of creative work, drawing inspiration from Susan Kare's iconic "Japanese lady" MacPaint art. The first shared piece is "The Great Wave off Kanagawa," which the creator notes was actually the second or third print tackled in the series. The strict resolution constraint was self-imposed to preserve authenticity. A desktop wallpaper version in PNG and PICT formats is available for 640×480 displays, and the work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 with a request for credit and backlinks when shared.

Comments: Commenters praise both Hokusai's extraordinary artistry — noting that zooming into archive.org scans reveals breathtaking economy of brushstroke and mastery of suspended motion — and the creator's dedication to severe technical constraints, arguing that 1-bit pixel art forces pure compositional solutions that color and higher resolution allow artists to avoid. The work is widely celebrated as a meaningful counterpoint to AI-generated art, exemplifying high-effort human creativity for its own sake. Discussion arose around Mac emulation on Apple Silicon for those wanting to try similar projects, and links were shared to MacPaint art from the mid-1980s and a site tracking seven current worldwide exhibitions of original Great Wave prints. Others noted curiosities: the CC NoDerivatives license's questionable enforceability against a public-domain source, a tip that inverting the image horizontally aligns with Japanese right-to-left reading, a mention that Mt. Fuji is easy to miss in the background, and that Hokusai also produced erotic art. The 8Bit Photo Lab app was suggested for automated 1-bit conversion on iOS and Android.

F.A.T. Lab and Sy-Lab released the Free Universal Construction Kit, ~80 3D-printable adapter bricks enabling interoperability between Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K'Nex, Krinkles, Lincoln Logs, Tinkertoys, Zome, and Zoob. Connectors were reverse-engineered with an optical comparator accurate to 2.54 microns, then released as free STL files on Thingiverse. Consumer-grade printers (Makerbot: 100-micron XY, 360-micron layers) may fall short of Lego's sub-10-micron tolerances; commercial Objet polyjet printers (42-micron XY, 16-micron layers) produce better results. The project deliberately targets toys with expired patents, delaying Zoob and Zome adapters until 2016 and 2022. Framed as "the VLC of children's playsets," it's a grassroots interoperability remedy bypassing proprietary lock-in. Licensed CC-BY-NC-SA, creators prohibit commercial mass production but permit individuals to contract fabrication bureaus like Ponoko or Shapeways for personal copies, asserting home printing qualifies as fair use under the "Must Fit Exception."

Comments: Users note the project's humorous acronym (F.U.C.K.) while raising substantive concerns. The CC-BY-NC license draws criticism from open-source advocates who argue the NonCommercial clause prevents paying a print shop to fulfill orders, and that CC-BY-SA alone would deter commercial exploitation while remaining truly open. The legal question of whether withholding Zoob and Zome designs fully resolves patent infringement risk is also debated. A practical concern surfaces: at least one user reports Ponoko rejected an upload as "too tiny" and flagged the universal adapter brick as having a non-manifold mesh, raising questions about file print-readiness. Several users reflect on toy longevity — Lincoln Logs date to 1918, Lego to 1945 — noting interoperability extends their value across generations. The omission of Construx is flagged, and commenters joke that corporate lawyers will be summoned by the project's branding. Some note this is a 2012 project resurfacing for renewed attention.

Japan's National Diet Library Digital Collection is an online service for searching and viewing digital materials held by the National Diet Library. The featured item appears to be 北斎模様画譜 (Hokusai Pattern Book, 1884), originally published as a reference for kimono textile design. The book was largely forgotten until 1986, when it was rediscovered in the Boston Museum of Fine Arts collection, after which Japanese art historians located additional prints. The work is now accessible via the NDL's digitization efforts, which aim to preserve and provide public access to rare historical materials. A copy is also available on Wikimedia Commons, offering an alternative access point for those outside Japan or without Japanese-language reading ability.

Comments: Commenters note that 北斎模様画譜 (1884) is available on Wikimedia Commons, clarifying its origins as a kimono textile pattern book that was rediscovered in 1986 at the Boston Museum before further prints were found by Japanese art historians. One commenter shares a poignant quote from M.C. Escher in his sixties, in which Escher expresses deep admiration for Hokusai's wave imagery, lamenting his own inability to capture waves and deferring to Hokusai and Japanese artists as the true masters of the form. Another commenter simply asks whether non-Japanese speakers can access or experience the material, highlighting a practical barrier of language and geographic accessibility that the NDL's digital interface does not fully address for international audiences.

Martin Galway, composer of iconic Commodore 64 game soundtracks, has released original assembly source files for his 1980s music players, inviting the public to reassemble, modify, and create new music with credit. Galway now owns the copyright, having acquired the rights from Infogrames after the original work-for-hire period. The release covers two SID player generations: the 1st Generation driver (1984–mid-1987, used in Wizball) and the 2nd Generation driver, first written for Athena and later used in Times of Lore and Insects in Space. The files help enthusiasts analyze how the players were structured. SID's expressive power came not from note sequences alone but from per-frame register manipulation — sweeping filter cutoff, gating ring modulation between voices, and retriggering ADSR envelopes mid-note — all driven by 50Hz interrupt routines on the 6510 CPU. The source files exceed C64 RAM capacity, indicating cross-assembly on external tools rather than native on-hardware development.
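The per-frame register manipulation described above can be sketched in Python (an illustrative toy, not Galway's actual driver): a note becomes a schedule of SID register writes applied on every 50Hz tick, here sweeping the filter cutoff and retriggering the envelope mid-note via the gate bit.

```python
# Illustrative sketch of per-frame SID register scheduling. Register
# addresses are the real SID map; the schedule contents are invented.

SID_BASE = 0xD400
FILTER_CUTOFF_HI = SID_BASE + 0x16   # $D416: filter cutoff, high byte
V1_CONTROL = SID_BASE + 0x04         # $D404: voice 1 control register

def build_schedule(frames=16):
    """One note's worth of per-frame writes: a filter sweep plus a
    mid-note ADSR retrigger via the gate bit."""
    schedule = []
    for f in range(frames):
        cutoff = 0x20 + f * 0x08           # sweep cutoff upward each frame
        schedule.append((f, FILTER_CUTOFF_HI, cutoff & 0xFF))
    schedule.append((8, V1_CONTROL, 0x14))  # triangle + ring mod, gate off
    schedule.append((9, V1_CONTROL, 0x15))  # gate back on: envelope restarts
    return schedule

def play(schedule, frames=16):
    """Simulate the 50Hz interrupt loop: apply all writes due each frame."""
    registers = {}
    for frame in range(frames):
        for f, reg, val in schedule:
            if f == frame:
                registers[reg] = val
    return registers

final = play(build_schedule())
```

The point the summary makes is visible here: the melody is one write, but the sound is the whole schedule.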

Comments: Commenters respond with nostalgia for Galway's work — Wizball, Parallax, Green Beret, Rambo — alongside deep technical appreciation. The core insight is that SID magic lies not in notation but in per-frame register scheduling: sweeping filter cutoff, gating ring modulation, and retriggering ADSR mid-note at 50Hz interrupts. The .sid format preserves the 6510 driver code, but converting it to pattern notation discards the actual sound design. Attempts to translate Wizball into Strudel JS or Tidal Cycles are discussed; AI transcriptions can reproduce the melody but not the timbre, since the sound IS the register schedule, not the notes. One user clarifies assembler directives in the source — DSP likely means "displacement" (paired with ORG to relocate code in memory), and DFC generates PETSCII rather than ASCII, similar to DFM. Several note the source files exceed C64 RAM, confirming external cross-assembly. SLAYRadio is cited as a long-running C64 music resource, and DeepSID is linked for streaming Wizball subtunes.

Desmond Morris, zoologist, author, surrealist painter, and TV presenter, died April 20 aged 98. He was best known for "The Naked Ape" (1967), written in four weeks, which framed human behavior through a Darwinian zoological lens rather than a cultural one and eventually sold 20 million copies. Morris studied ethology at Birmingham University, became curator of mammals at London Zoo, gave paintbrushes to a chimp named Congo whose work later sold for thousands, and unsuccessfully tried to breed giant pandas in captivity. The book was controversial: feminists objected to his portrayal of men as risk-taking hunter-gatherers driving evolution, others disputed his framing of religion as submission to an alpha male, and critics called it "salacious guesswork." He later wrote "The Human Zoo," "Manwatching," and "Intimate Behaviour," produced the 1994 TV series "The Human Animal," and turned down involvement in what became "Big Brother." When invited to update "The Naked Ape" as genetics advanced, Morris changed only Earth's population figure. He is remembered as a major popularizer of science who helped situate humans within the broader natural world.

Comments: Commenters remember Morris warmly across multiple dimensions of his work, with several noting his lesser-celebrated book "Catwatching" as a thoughtful observational study of cats, and others confirming he also authored "Manwatching" (later reissued as "Peoplewatching"). One commenter reflects that his anthropological conclusions could be arbitrary but were consistently provocative in challenging assumptions about human progress from primitive origins. A link to a notable archival TV clip is shared as a tribute to his broadcasting legacy.

Meta AI researchers argue parameter count and computation should be treated as independent model design dimensions, not conflated. Hash Layers use a sparse mixture-of-experts (MoE) approach assigning each vocabulary token to a fixed expert via hashing — no learned routing needed — so models with 1.28B parameters activate only 17% per input, outperforming the Switch baseline on Reddit language modeling; at 4.5B parameters, they beat BASE with 11% better training throughput. Staircase and Ladder models take the opposite tack, increasing computation without adding parameters by reapplying the same Transformer block repeatedly: Ladder stacks identical Transformers, while Staircase shifts each block forward in time, forming a recurrent architecture that maintains internal state. Feedforward Transformers and Ladder models struggle on persistent-state tracking tasks; Staircase solves these easily while sharing Ladder's compute-per-parameter boost on language modeling. Combining Hash Layers with Ladder yields orthogonal improvements over either alone, enabling fine-grained independent control over parameter count and computation budget.
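The fixed-hash routing idea can be sketched in a few lines (a minimal illustration of the concept, not Meta's implementation; hash function and expert count are arbitrary here): each token deterministically maps to one expert, so no routing network needs to be learned and only the touched experts run per input.

```python
# Sketch of Hash Layer routing: a fixed hash assigns each vocabulary
# token to one expert, replacing learned routing entirely.
import hashlib

NUM_EXPERTS = 16

def expert_for(token: str) -> int:
    # Deterministic hash -> the same token always lands on the same expert.
    h = int(hashlib.md5(token.encode()).hexdigest(), 16)
    return h % NUM_EXPERTS

def route(tokens):
    """Group a batch of tokens by the expert that will process them."""
    buckets = {}
    for t in tokens:
        buckets.setdefault(expert_for(t), []).append(t)
    return buckets

buckets = route(["the", "cat", "sat", "the"])
```

Because assignment is static, only the experts present in `buckets` need to execute, which is how a large-parameter model can activate a small fraction of itself per input.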

Comments: The sole comment draws a parallel to independent research on LLM internal layer structure, referencing work that identifies redundant "thinking layers," removes duplicates, and recombines them back-to-back to improve benchmark scores with minimal overhead — conceptually resonant with the article's thesis that architectural manipulation beyond simple parameter scaling yields meaningful gains.

Canal Plus launched in France on November 4, 1984 as a subscription TV channel, encrypting its SECAM signal with "Discret 11." Rather than true digital encryption, Discret 11 shifted each of the 576 scan lines rightward by 0, 902ns, or 1804ns, padding left with black, using a Linear Feedback Shift Register seeded with an 11-bit key; content stayed within the "title-safe" area for lossless decoder reconstruction. Audio was scrambled keylessly via AM at 12.8 kHz by swapping high and low frequency bands. Subscriber decoders hashed an 8-digit monthly code with each unit's EEPROM serial number into a 16-bit key, producing six 11-bit keys for audience levels, with line 622 luminance blinks indicating the active level. A 7th "free mode" at month-end always used key 1337. Schematics leaked in December 1984 after a court order barred Radio Plans from publishing them, and "Le quotidien de Paris" published them anyway; electronics stores openly supplied pirate decoder parts. Discret 11 was replaced by Nagravision in 1992 and retired by 1995, while Canal Plus itself became a major European satellite broadcaster.
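The scrambling scheme can be reconstructed in outline (hedged: the tap positions and the bit-to-delay mapping below are my illustrative guesses, not the broadcast specification): an 11-bit LFSR seeded with the key picks one of the three horizontal delays for each of the 576 scan lines, and a decoder holding the same key regenerates the sequence and shifts each line back.

```python
# Hedged reconstruction of the Discret 11 idea: an 11-bit LFSR chooses
# one of three line delays per scan line. Taps and delay selection are
# illustrative, not Canal Plus's actual parameters.

DELAYS_NS = (0, 902, 1804)

def lfsr11(seed, taps=(10, 8)):
    """Infinite 11-bit LFSR stream (taps chosen for illustration)."""
    state = seed & 0x7FF
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & 0x7FF
        yield state

def line_delays(key, lines=576):
    """Per-scan-line delay schedule for one frame, derived from the key."""
    gen = lfsr11(key)
    return [DELAYS_NS[next(gen) % 3] for _ in range(lines)]

delays = line_delays(key=0x5A5)
```

The decoder's job is then purely mechanical: with the key in hand, regenerate `delays` and undo each line's shift, which is why leaked schematics made piracy so accessible.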

Comments: Commenters share personal memories of Discret 11-era piracy: one brute-forced the system at age 12 using a Mac with PAL-SECAM cards and a free CodeWarrior CD from a Paris convention, while another's father built a pirate decoder from photocopied schematics passed among electronics colleagues before the web existed. Several note that determined teenagers could partially decode scrambled video by squinting and learn to interpret the distorted audio through repeated exposure, useful on late-night weekend TV. The embedded "1337" (leet) free-mode key is highlighted as a notable Easter egg. UK viewers compare the system to VideoCrypt, while one notes Poland used a visibly different Canal+ encryption scheme. Some push back on "did not operate for long," arguing 11 years is substantial. The piracy arc is characterized as continuous: cheap North African pirate decoders gave way to updatable satellite smart cards, and today services like Stremio aggregate streaming catalogs freely, suggesting piracy continuously adapts to new distribution models. Electronics store employees who supplied decoder components are praised as an act of civil disobedience.

grdpwasm is a browser-based RDP client combining a Go WebAssembly frontend with a lightweight Go proxy server that bridges WebSocket connections to RDP's TCP port, since browsers cannot open raw TCP sockets directly. Built with Go 1.24+, the project's make all target produces three artifacts: a WASM binary, a JS runtime support file, and a combined WebSocket-to-TCP proxy and static file server. Users navigate to localhost:8080, enter host, port, domain, credentials, and resolution, then click Connect to see the remote desktop rendered in a canvas element. Full keyboard input is forwarded via RDP scan codes, and mouse move, button clicks, and scroll wheel are all supported, though the browser tab must have focus for key events to register. Remote audio streams over RDPSND through the browser's Web Audio API at 44100 Hz, stereo, 16-bit PCM. Security caveats are notable: the proxy accepts connections from any origin and transmits credentials over WebSocket, so HTTPS/WSS via a TLS-terminating reverse proxy such as nginx or Caddy is strongly recommended on untrusted networks. The project is licensed under GPLv3.

Comments: Commenters quickly identify clipboard sharing as the most critical unmentioned feature, with one detailed breakdown explaining that the browser Clipboard API gates writes behind user-gesture requirements and prompts users on every read in most browsers — meaning paste-into-RDP works smoothly but paste-out-of-RDP requires an extra click each time, and behavior may differ between Chrome and Firefox. Alt-Tab key capture is raised as another longstanding browser-RDP pain point, previously problematic in Guacamole, since the browser intercepts that shortcut before it can be forwarded to the remote session. Some users question the practical value given that native RDP clients ship on virtually every major platform. A niche compatibility question is raised about whether the tool supports opening RDP files from CyberArk PAM.

LamBench (λ-bench) by VictorTaelin benchmarks 120 pure lambda calculus problems in which AI models implement algorithms in Lamb, a minimal lambda calculus language, with one attempt per problem. Models receive a problem description, data encoding spec, and test cases, and must return a .lam program that passes all input/output pairs. Top-lab models cluster near the top while smaller and cheaper models fall far behind, countering recent "opus killer" marketing. All models universally fail FFT because Cooley-Tukey requires mutable arrays and integer indexing — in pure lambda calculus, Church numeral index lookups are O(N) each, degrading the algorithm from O(N log N) to O(N^2 log N) or worse, with no structurally similar training examples available. Version regressions appear: GPT-5.5 scores noticeably below GPT-5.4, and Opus 4.7 slightly below Opus 4.6. Critics argue the single-shot methodology underrepresents true model capability, as non-deterministic probabilistic models need roughly 45 samples per problem to properly characterize their distribution.
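The indexing cost behind the FFT failure is easy to demonstrate (a sketch using Python closures as stand-ins for lambda terms): with lists encoded as cons cells and indices as Church-style numerals, every element access walks the list, so a[i] costs O(i) rather than O(1).

```python
# Why FFT degrades in pure lambda calculus: cons-cell lists make each
# index lookup a linear walk, turning Cooley-Tukey's O(N log N) into
# O(N^2 log N) or worse.

cons = lambda h, t: lambda f: f(h, t)
head = lambda l: l(lambda h, t: h)
tail = lambda l: l(lambda h, t: t)

def from_py(xs):
    """Build a cons-cell list from a Python list."""
    l = None
    for x in reversed(xs):
        l = cons(x, l)
    return l

def nth(l, n):
    """n tail-steps to reach index n: the O(N)-per-lookup cost."""
    for _ in range(n):
        l = tail(l)
    return head(l)

xs = from_py([3, 1, 4, 1, 5])
assert nth(xs, 2) == 4
```

Cooley-Tukey performs O(N log N) such indexed accesses, so each one costing O(N) instead of O(1) accounts for the degraded bound quoted above.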

Comments: Top-lab models cluster tightly at the top while smaller and budget models lag far behind, undermining frequent "opus killer" marketing — even DeepSeek's massive 1.6T model reportedly still trails Opus per DeepSeek's own researchers, yet a 27B dense model has been marketed as equivalent. The single-shot methodology draws criticism: properly benchmarking non-deterministic probabilistic models requires roughly 45 runs per problem to capture distributional behavior, not one attempt. The universal FFT failure is explained technically: Cooley-Tukey requires mutable arrays and shared butterfly state, but in pure lambda calculus those become O(N) Church numeral lookups per index, turning O(N log N) into O(N^2 log N) or worse, and training data contains no encoding-aware implementations to learn from. Version regressions are flagged: GPT-5.5 noticeably underperforms GPT-5.4, and Opus 4.7 slightly trails Opus 4.6 on this benchmark. Users also note a broken live-results link caused by URL case sensitivity (LamBench vs lambench) and request Mistral model results be added.

Plain text and ASCII diagramming tools are seeing renewed interest, with Mockdown (web/mobile), Wiretext (web/desktop-only), and Monodraw (Mac-only) offering intentionally constrained visual choices for embedding diagrams in source code and as entry points to generative AI. These tools represent a contemporary take on text-based interfaces that peaked in the 1970s–80s, now updated with mouse/trackpad support, web access, and modern performance. The core appeal is constraint: limiting visual choices reduces friction and forces focus on data structure over presentation. Plain text's portability and the universality of text editing as an interface give it lasting power—users compare it to SQL and TCP/IP in longevity. The tools are technically "ASCII" only in a colloquial sense, often relying on extended Unicode box-drawing characters. Self-imposed constraint is framed as increasingly valuable: as AI makes producing things ever easier, deliberately making them harder for yourself may matter just as much.

Comments: Users broadly celebrate plain text's durability with concrete workflows: one replaced QuickBooks with Beancount+Fava, adding git-attested commits and RFC3161 timestamping for audit-proof accounting, while others maintain 20+ years of notes and invoicing entirely in plain text using custom CLI tools. Historically, commenters push back on the "peaked in 1970s–80s" framing, noting early 1990s DOS apps with VGA text modes and mouse support—QBASIC, EDIT.COM—as a high-water mark. Additional ASCII tools are surfaced: asciiflow.com, asciidraw.github.io, and Emacs' artist-mode. Critics note that plain text's simplicity trades away enforced structure—yaml files vs. a DBMS being the canonical tradeoff—and that "plain text" is routinely conflated with extended Unicode characters, raising accessibility concerns for screen reader users. Dylan Beattie's "There's no such thing as plain text" talk is cited as a worthwhile counterpoint. The practical consensus is that UTF-8 plus light conventions (Markdown, JSON) constitutes a "good enough standard," with lack of enforced schema remaining plain text's main weakness at scale.

Niri, a scrollable-tiling Wayland compositor, ships its most-requested feature — blur — in mainline, supporting efficient "xray" mode (static blurred wallpaper) and non-xray mode; foot, kitty, and Ghostty already implement the ext-background-effect protocol. Screencasting gains cursor metadata (enabling OBS cursor toggle), a Cast IPC for monitoring active sessions via niri msg casts, and a dynamic cast target delay fixing Microsoft Teams compatibility. A rendering refactor replaced Rust pull-based iterator chains with push-based closures, eliminating temporary allocations and yielding 2-3x speedup on modern hardware and 8x on an old Eee PC. GPU profiling via Tracy was integrated into Smithay, enabling GPU zone visibility for diagnosing frame drops. Animation sync bugs in unfullscreen/unmaximize were fixed, IME now works in GTK 4 pop-ups, and Escape cancels drag-and-drop. Optional config includes (optional=true) and tilde path expansion were added. A wrong OpenGL enum in Smithay was corrected to restore screenshots on older Intel laptops. The project moved to a GitHub org for issue triage delegation and crossed 20,000 stars.
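The pull-to-push refactor described above can be illustrated generically (Niri's code is Rust; this Python analogue is mine and only conveys the shape of the change): a pull-based pipeline threads values through chained lazy stages that the consumer drags forward, while a push-based one hands each element straight to a sink closure in a single pass, with no intermediate machinery.

```python
# Language-agnostic illustration of pull-based iterator chains versus
# push-based closures (the style of refactor the release notes describe).

def pull_pipeline(items):
    # Chained generator stages; the consumer pulls values through each one.
    visible = (i for i in items if i % 2 == 0)
    scaled = (i * 10 for i in visible)
    return list(scaled)

def push_pipeline(items, emit):
    # One pass; each element is pushed to the sink closure immediately.
    for i in items:
        if i % 2 == 0:
            emit(i * 10)

out = []
push_pipeline(range(6), out.append)
assert out == pull_pipeline(range(6)) == [0, 20, 40]
```

In Rust the pull style additionally forces temporary allocations at stage boundaries, which is what the refactor eliminated to get the reported speedups.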

Comments: Users widely praise Niri, many having migrated from i3, sway, or Windows, sharing dotfiles, NixOS flakes, and shell configs built around it. OmniWM is highlighted as a Mac alternative offering a Niri-inspired scrollable layout, drawing interest from macOS users wanting native trackpad integration and blur. Users from traditional tiling WM workflows ask how the mental model shifts in infinite-strip navigation, while one ex-macOS user found spatial window tracking mentally taxing. Pain points cited include missing xwayland-satellite drag-and-drop between X and Wayland apps, HDR not yet available, and no native macOS port. Niri is compared favorably to Hyprland for API stability, and lighter alternative mangowm is mentioned. A question surfaces about per-monitor independent virtual desktop cycling, with others noting Niri treats each monitor as a separate strip by design rather than a unified surface — a choice some find counterintuitive. Dank Material Shell and awesome-niri are noted as fast paths to a complete desktop environment. PaperWM for GNOME is mentioned as a related inspiration, and one long-time i3 user describes switching to Niri after a decade as freeing.

Project Eleven awarded 1 BTC for a submission claiming quantum ECDLP key recovery on IBM hardware for 17-bit curves. A researcher swapped the IBM backend in solve_ecdlp() with os.urandom, leaving the ripple-carry oracle and d·G == Q verifier untouched. Every key from 4-bit through 17-bit was recovered identically via random bitstrings on a laptop. The pipeline accepts d = (r − j)·k⁻¹ mod n when the classical verifier passes, so P(≥1 hit in S shots) = 1 − (1 − 1/n)^S under uniform noise. For the 4–10 bit cases, shots/n is 1.9×–1,170×, making urandom success near-certain; for the 17-bit prize case (n=65,173, shots=20,000), theoretical urandom success is 26.43%, and empirical runs succeeded roughly 40% of the time. The author's README even warns that shots >> n lets random noise recover the key alone. The underlying engineering — CDKM ripple-carry adders on heavy-hex topology, six oracle variants, semiclassical phase estimation — is genuine, but no quantum signal contribution to key recovery was demonstrated.
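The quoted 26.43% figure follows directly from the formula above and is easy to reproduce:

```python
# Probability that at least one of S uniformly random guesses hits a
# key in a space of size n: P = 1 - (1 - 1/n)^S.

def p_hit(n, shots):
    return 1 - (1 - 1 / n) ** shots

# The 17-bit prize case from the article: n = 65,173 and 20,000 shots.
p = p_hit(65_173, 20_000)
print(f"{p:.2%}")   # → 26.43%
```

For the small 4–10 bit cases, where shots exceed n by up to 1,170×, this probability is effectively 1, which is why pure urandom "recovered" every key.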

Comments: One commenter predicted this in a SIGBOVIK 2025 paper, arguing that for small-n, Shor's algorithm "succeeds" on random samples and noisy quantum circuits effectively become random number generators. Users stress this is a failure of Project Eleven's validation, not a critique of quantum computing broadly — a 17-bit key (131,072 possibilities) is trivially brute-forceable classically. "Dequantization" — testing whether results are classically reproducible — is noted as a legitimate quantum information research area, with a recent related arxiv paper cited. When circuit depth exceeds hardware coherence, the quantum device mimics a uniform random sampler, making small-n cases poor progress benchmarks. Quantum grifting in cryptocurrency markets is flagged as a related concern. A minority broadly dismiss quantum computing, while others counter that early experiments are physics demonstrations — scientifically valid even when slower than classical alternatives, analogous to pre-breakeven fusion.

Framework's Laptop 13 Pro is a ground-up redesign with a CNC-machined aluminum chassis that fixes the screen wobble of earlier models. The 1.4kg laptop pairs a 74Wh battery (20% larger than its predecessor) with Intel Core Ultra Series 3 (Panther Lake) chips—PCIe 5.0, Wi-Fi 7, Arc B390/B370 graphics—for a claimed 20-hour battery life; AMD Ryzen AI 300-series boards are also available. A custom 2880×1920 IPS display delivers 700 nits, 30-120Hz VRR, touchscreen, and per-unit color calibration. A haptic LiteOn touchpad with four piezo elements frees internal space for the larger battery. RAM shifts to user-replaceable LPCAMM2 LPDDR5X-8533 in 16/32/64GB. Most parts remain backward-compatible with older Laptop 13 chassis, except the keyboard/trackpad/battery trio, handled via a Bottom Cover Upgrade Kit. Framework ships Ubuntu pre-installed for the first time, and the Linux version outsold Windows at launch. Pricing starts at $1,199 DIY or $1,499 Ubuntu pre-built, with shipping in June 2026.

Comments: Users highlighted several discussion points. The expansion card bay redesign drew interest, with hopes for set screw locking—like the F11—to prevent cards from dislodging when removing USB cables. Price comparisons showed Framework's Ultra X7 configurations running notably higher than MacBook Pro 14 M5 equivalents in UK pricing. The Ubuntu version outselling Windows at launch was seen as validating the niche but commercially viable Linux market. Users questioned Intel vs. AMD trade-offs for coding, compiling, and local LLM workloads, and whether either offers unified memory comparable to Apple Silicon. Critics noted Framework lacks open-sourced firmware unlike NovaCustom, StarLabs, and System76, despite community requests, weakening its "Linux-first" claims. Some questioned why a Linux laptop aims to clone Mac aesthetics, arguing Linux users have actively rejected that design. Others preferred waiting for real-world reviews before preordering, and anticipated cheaper OEM alternatives to emerge.

A Clojure/Emacs programmer reflects on two project approaches — immediately building vs. researching prior art — arguing that clearly internalized success criteria are the pivotal factor. A woodworking shelf succeeded because the goal was simply to have fun with a friend. In contrast, four hours of researching semantic diff tools triggered scope creep before the real goal was remembered: build a minimal Emacs diff workflow. A similar YAGNI moment arose building a fuzzy file search using Nucleo, where path-segment anchoring logic was built and then discarded. The author proposes a "conservation law": LLM coding speed gains are offset by increased feature creep and rabbit holes. Key tools surveyed include difftastic (treesitter CST diffs, mismatches struct entities), semanticdiff.com (polished but no library access), mergiraf and weave (Rust treesitter merge drivers), diffast (tree edit-distance from 2008), and autochrome (Clojure-specific diffs). The planned minimal approach is to parse treesitter ASTs, diff entity lists, and show Magit-style entity-level staging in Emacs.
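The entity-list diff step can be sketched compactly (a hedged illustration: the entity names are hypothetical, and difflib stands in for a real tree-sitter parse): flatten each file's top-level entities to a list, then diff at entity granularity so edits read as whole-entity changes rather than scattered line edits.

```python
# Minimal sketch of entity-level diffing: compare lists of top-level
# entities instead of raw text lines.
import difflib

def entity_diff(old_entities, new_entities):
    """Return (op, entity) pairs for changed entities only."""
    sm = difflib.SequenceMatcher(a=old_entities, b=new_entities)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            continue
        out.extend(("-", e) for e in old_entities[i1:i2])
        out.extend(("+", e) for e in new_entities[j1:j2])
    return out

old = ["defn parse", "defn render", "defn main"]
new = ["defn parse", "defn render-fast", "defn main", "defn cli"]
print(entity_diff(old, new))
# → [('-', 'defn render'), ('+', 'defn render-fast'), ('+', 'defn cli')]
```

Each changed entity then becomes a natural unit for Magit-style staging, which is the UX the author is aiming for.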

Comments: Commenters broadly resonate with the scope creep problem, comparing it to PhD research exhaustion and CIA sabotage manual tactics around endless committee review and analysis paralysis. Several cite Obama's "better is good" and CEO wisdom favoring shorter projects with earlier launches. A recurring theme is identifying one's "why" — learning, personal use, or commercial — to choose the right research-vs-build balance. The LLM "conservation law" resonates strongly: speed gains are offset by over-engineered complexity, as LLMs suppress the instinct to pause and reconsider. Feature flags are called out as subtle scope creep, with one developer solving it by requiring tests for flag-off behavior before landing features. Deadlines are cited as the most reliable antidote to scope creep. RefactoringMiner is noted as the newest structural diff tool, strong for Java. Some push back, arguing upfront domain modeling prevents costly redesign in unfamiliar domains. The consensus is that shipping a minimal v1 beats exhaustive planning, though domain knowledge determines the right balance.

HEALPix (Hierarchical Equal Area isoLatitude Pixelisation) is a sphere-pixelization algorithm devised in 1997 by Krzysztof Gorski and first published in 1998. It maps the 2-sphere to 12 equal-area quadrilateral facets using Lambert cylindrical equal-area projection for equatorial regions and interrupted Collignon projection for polar regions. Pixels at each level are equal in area with centers on latitude circles; the H=4, K=3 variant folds into a perfect cube with the Arctic Circle becoming a square. The scheme is efficient for spherical harmonic transforms and dominates CMB map storage in cosmology; ESA's Gaia mission uses it for source identification. Approved by the IAU FITS Working Group in April 2006, it carries the HPX keyword in the FITS standard. Unlike most spherical grids it is not polyhedron-derived; its 12 faces resemble a rhombic dodecahedron but are topologically incompatible with any genus-0 polyhedron. Alternatives include the Hierarchical Triangular Mesh and Quadrilateralized Spherical Cube. Implementations exist in C, C++, Fortran90, IDL, Java, and Python, supporting resolutions to 0.4 milliarcseconds.
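The basic bookkeeping follows from the 12-facet construction: each facet is subdivided nside × nside, giving 12·nside² equal-area pixels, and equal area makes the angular scale a one-line calculation (the helper names below are mine).

```python
# HEALPix pixel counts and angular scale from first principles.
import math

def npix(nside):
    """Total pixels: 12 base facets, each split nside x nside."""
    return 12 * nside ** 2

def pixel_area_sr(nside):
    """Equal-area by construction: full sphere (4*pi sr) split evenly."""
    return 4 * math.pi / npix(nside)

def mean_spacing_arcsec(nside):
    """Rough angular pixel scale: sqrt of pixel area, in arcseconds."""
    rad = math.sqrt(pixel_area_sr(nside))
    return math.degrees(rad) * 3600

assert npix(1) == 12
assert npix(64) == 49_152
```

At nside = 2^29 this spacing comes out to roughly 0.4 milliarcseconds, consistent with the maximum resolution quoted for existing implementations.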

Comments: Users draw parallels between HEALPix and Uber's H3 hexagonal grid, noting both solve the core problem of assigning a sphere-surface point to a stable cell ID despite differing algorithms. Practical uses cited include healpy for undergraduate astrophysics void-galaxy research. Users highlight HiPS (Hierarchical Progressive Surveys) maps, which layer progressively finer HEALPix grids for seamless sky zooming in tools like Aladin. HEALPix's indexing properties draw praise, with comparisons to geohashes for spatial indexing due to shared hierarchical locality characteristics.

A privacy-preserving approach to 3D body reconstruction uses 8 questions fed into a small MLP (~85KB, two 256-unit layers) outputting 58 Anny body shape parameters, running in milliseconds on CPU. Building on Bartol et al. (2022), the team augmented height/weight with build, belly, body shape, cup size, gender, and ancestry, reducing waist variation in fixed h/w buckets from ~9cm to ~1.3cm. A physics-aware loss includes Anny's differentiable forward pass so mass errors propagate through all 58 params jointly — something Ridge regression can't do since it solves each output independently. Results: 0.3cm height MAE, 0.4–0.5kg mass MAE, 2.7–4.9cm mean bust-waist-hip (BWH) error — beating Bartol's h+w regression (~7cm BWH) and the team's own photo pipeline (5–8cm BWH). Two bugs were fixed: Anny's density constant (980 kg/m³) ignored gender and fat composition, corrected via Navy formula and Siri two-component model; and an ancestry blendshape mismatch between training and inference caused a 3kg noise floor, resolved by adding ancestry to the questionnaire. The system is live at clad.you, with interactive body tuning planned next.
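The "~85KB" figure is consistent with the stated architecture (a sanity check, assuming 8 inputs, two 256-unit hidden layers, 58 outputs, and roughly one byte per parameter; the storage format is my assumption):

```python
# Parameter count for a dense MLP: weights (a*b) plus biases (b) per
# layer transition.

def mlp_params(sizes):
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

n = mlp_params([8, 256, 256, 58])
print(n)   # → 83002
```

About 83k parameters at one byte each lands right at the quoted ~85KB, and a network this small is also why inference takes milliseconds on CPU.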

Comments: Users note the strong results and suggest precomputing ~10M combinations could make responses nearly instant. A question asks why height/weight predictions aren't zero-error given known inputs — the model outputs a population-average body, not an exact reconstruction. The Levi's Personal Pair/Intellifit 3D scanning program (~20 years ago) is cited as a parallel that failed due to wrong target demographic, privacy concerns, manufacturing tolerances, and novelty framing; the questionnaire approach has looser tolerances and better privacy. Technically engaged readers praise the ancestry fix as elegant train/inference distribution alignment and the physics-aware loss as the right call for coupling 58 output params, with the "boring model, interesting data pipeline" lesson resonating with constrained-hardware developers. The "averages lie about tails" observation is called out as quotable and worth requiring in scientific papers. Missing torso-to-leg ratio coverage is noted as a limitation. Writing style is flagged as possibly AI-assisted or non-native English, though UX clarity earns praise.

Paraloid B-72, originally developed by Rohm and Haas as a surface coating and flexographic ink vehicle, has become a widely adopted adhesive and consolidant among conservator-restorers. Chemically an ethyl methacrylate–methyl acrylate copolymer, it is non-yellowing and soluble in acetone, ethanol, toluene, and xylenes. Its applications span ceramic and glass conservation, fossil preparation, piano hammer hardening, and museum object labeling. Key advantages include greater strength and hardness than polyvinyl acetate while remaining more flexible and stress-tolerant than most adhesives — and unlike cellulose nitrate, it requires no plasticizer additives. Acetone is the preferred solvent, though mixtures with ethanol and toluene adjust working time and final properties. Fumed colloidal silica can be added to improve workability by distributing stress during solvent evaporation. The primary drawback is handling difficulty: precise application is challenging, as with other acrylics. Most notably, conservator Stephen Koob of the Corning Museum of Glass pioneered using cast B-72 sheets as fill material in damaged glass objects.

Comments: Commenters highlight practical personal uses well beyond institutional conservation — one user successfully preserved surgically removed bones by soaking them in a 1:8 B-72-to-acetone solution for an hour, noting the porous nature of bone benefits from a thinner mix and that the material is inexpensive. Others ask whether the same technique would work for fragile shells or sand dollar skeletons, currently treated with thinned white glue. Some wonder about compatibility with 3D printing as a soluble support material (analogous to HIPS with d-limonene), and others ask how its strength compares to MMA structural adhesives. One commenter succinctly distills its appeal as "stronger, harder, less brittle, clear wood glue you can dissolve with acetone." Several note the post's niche appeal even by Hacker News standards, with requests for more conservator-focused content, and one user lightheartedly admits misreading the name as "Polaroid."

Panic, a Portland-based game publisher, launched a mail-in rewards program inspired by Activision's 1980s jacket-patch giveaways for players who proved in-game achievements. Players completing games like Thank Goodness You're Here, Arco, and Despelote find a link in the game's credits to an instructional comic strip, then mail a self-addressed stamped envelope to receive a themed patch. Artist James Carbutt drew the comic and added an unrequested panel encouraging notes to developers, opening an unexpected flood of creative correspondence. Over a thousand pieces arrived in the first month: needlepoint crafts, hand-drawn art, an iPod Nano with a custom playlist, a glitter bomb, $20 from a remorseful pirate, a wedding invitation, a dead fly, and one accidentally enclosed child's tooth. Marketing head Kaleigh Stegman manages daily sorting and return mailings, while co-founder Cabel Sasser says physical letters feel far more meaningful than online praise. Players wrote personal notes—one about health issues keeping them from soccer, another thanking developers for representing Ecuador authentically. Panic digitally archives all letters to share with development teams worldwide.

Comments: Commenters draw parallels and share admiration for Panic's program. One recounts working at an insurance firm opening returned mail-shots for over-50s life insurance policies, noting that while most envelopes contained ripped-up applications, a small percentage yielded unusual items such as a note claiming to have been farted on, toenail clippings, explicit magazine cuttings, deliberately fake submissions under joke names, and random jar labels—making an otherwise dull task more entertaining. Another expresses genuine admiration for Panic's software quality, the Playdate console, and their publishing choices, and mentions developing their own game with thoughts of potentially pitching it to Panic. A third commenter simply found the dead-fly submission the most amusing detail in the story.

Photographers Monique and Chris Fallows captured 304 individual humpback whales in one day off South Africa in December 2025—a global record. Industrial whaling had cut humpbacks to under 5% of pre-whaling numbers, but a 40-year-old moratorium spurred recovery, with southern hemisphere populations growing up to 12% per year. Super-groups (20+ whales within five body-lengths) soared from 10 to 65 annual sightings off South Africa between 2015 and 2020. Of 372 unique individuals identified over two days, most appear under 10 years old—newly born post-moratorium generations. Experts are uncertain whether super-groups reflect prey shifts, population growth, or behavior simply invisible when whale numbers were low. During aggregations, whales use chaotic lunge feeding through concentrated krill balls rather than coordinated bubble-net technique—"controlled chaos," per researchers. Happywhale uses AI image recognition across 1.5 million photos to track individuals and migration globally. Despite recovery, humpbacks still face threats from fishing gear entanglement, vessel strikes, noise pollution, and warming seas.

Comments: Commenters largely responded with humor, riffing on "super-group" as a musical term with jokes about prog rock bands and anticipated album releases. Several users invoked Star Trek IV: The Voyage Home—where humpback whales communicate with an alien probe—speculating the whales may be forming a delegation or army to respond to a space threat. Tolkien references appeared too, with one calling the gathering "the Entmoot of the sea." On substance, users praised the underappreciated role whales play in transporting nutrients across oceans as a driver of global marine ecosystem health. One commenter flagged mixed units in the piece, noting mph and km/h used in the same sentence while height used meters. Others expressed hope for whale-human communication technology analogous to Google's DolphinGemma, to help whales avoid hazards or seek help when entangled. Concern was raised about a stranded whale near Germany, and curiosity about how dolphins perceive these massive gatherings. The thread balanced playful cultural references with genuine appreciation for the conservation milestone.

Windows 2.x, released December 1987, was a graphical shell atop MS-DOS built in eight months by Tandy Trower, whose team included graphic designers — later influencing Microsoft UI through Windows 95. It added overlapping windows, movable desktop icons, and keyboard shortcuts, features that had been cut from Windows 1.x. Two variants shipped: Windows/286 for legacy hardware and Windows/386 with protected mode, preemptive multitasking, and memory beyond DOS's 640 KiB limit. Development ran parallel to IBM's OS/2, with matching UIs so users could migrate; Windows 2.x was seen as temporary until OS/2 replaced DOS. In March 1988, Apple sued Microsoft and HP over the Macintosh GUI's "look and feel," but courts evaluated elements individually, found most uncopyrightable or derived from Xerox's Alto and Star, and Apple lost in 1994. Xerox sued Apple too but was barred for filing too late. Windows 2.1 added HIMEM.SYS and supported 127 printers; the 386 edition became the first Windows version acclaimed by critics and customers, paving the way for Windows 3.0.

Comments: Commenters note that Gabe Newell was lead developer of Windows 1, 2, and 3 before porting Doom to Windows ahead of Windows 95. One detailed comment disputes calling Windows a mere "GUI shell over DOS," arguing Windows controlled nearly all system resources — memory, processes, video, timers, input, and printers — leaving DOS only for disk I/O; it cites Raymond Chen's blog posts on real-mode multitasking tricks, including dynamically loading/unloading code and patching stack return addresses on the fly to stay transparent to software authors. Minor UX complaints include pre-collapsed article sections hiding content and a series header styled as a link but not linked. Some readers reflect that MS-DOS's apparent simplicity breaks down under multitasking or networking demands, and that modern computing has lost simplicity along the way. One invokes the classic quip that OS/2 was "a better DOS than DOS, and a better Windows than Windows." A new reader praised the article's nostalgic design and subscribed.

The RODECaster Duo audio mixer runs a full 64-bit Linux environment on an ARM SoC, discovered when the author reverse-engineered its firmware update process. The firmware is a plain gzipped tarball with no signature verification, copied to a temporarily mounted disk alongside an MD5 file. The device uses two redundant partitions so a failed flash boots the other. SSH is enabled by default with public-key-only auth but ships with hardcoded RODE-owned RSA and ed25519 keys. Using Wireshark with USBPcap on Windows, the author captured the HID update protocol: the RODECaster App sends ASCII 'M' over HID report 1 to enter update mode, then copies the archive, and sends 'U' to trigger flashing. Claude Code analyzed the pcap in roughly 10 minutes and produced a working Python script. The author then built custom firmware inside a container, adding their own SSH key and enabling password auth, and successfully flashed it. A vulnerability disclosure was submitted to RODE about the hardcoded keys, but no response was received. The author views the device's openness positively but hopes any future security tightening preserves user control.
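The captured protocol is simple enough to sketch: per the write-up, an ASCII 'M' on HID report 1 enters update mode, the gzipped tarball is copied over with an MD5 sidecar, and 'U' triggers the flash. A minimal illustration of building those reports and computing the sidecar checksum — the report padding and 64-byte size are assumptions here, not confirmed details:

```python
import hashlib

# Build a HID output report carrying a single ASCII command byte.
# Report ID 1 and the 'M'/'U' commands come from the article's capture;
# the 64-byte report size and zero-padding are illustrative assumptions.
def hid_report(command: str, report_id: int = 1, size: int = 64) -> bytes:
    payload = bytes([report_id]) + command.encode("ascii")
    return payload.ljust(size + 1, b"\x00")  # +1 for the report ID prefix

# The firmware ships as a plain gzipped tarball with an MD5 file alongside,
# so the only integrity check is one the host can trivially recompute.
def firmware_md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

enter_update = hid_report("M")   # step 1: device enters update mode
trigger_flash = hid_report("U")  # step 3: device flashes the copied archive
```

Since an MD5 sidecar with no signature proves nothing about origin, anyone who can write the mounted disk can flash arbitrary firmware — which is exactly what the author did.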

Comments: Commenters broadly agree that signed and open firmware are not mutually exclusive, arguing vendors should allow owners to enroll their own keys rather than choosing between total lockdown and no verification at all. The Linux-on-ARM-SoC finding doesn't surprise many, who note vendor BSPs routinely ship with sshd running simply because nobody on the audio team owns the rootfs. The most pointed technical concern raised is whether the SSH daemon listens only on the USB-side network interface or on the physical LAN port — the former is merely annoying, the latter a real risk. Several commenters hope RODE doesn't respond to this disclosure by locking firmware updates down entirely. There is notable enthusiasm about AI-assisted hardware hacking: what once required elite reverse-engineering skill now takes minutes with an agent analyzing a pcap. One commenter flags the EU Cyber Resilience Act as regulatory pressure that could soon force vendors to close exactly this kind of openness. Mild criticism surfaces that the author was too measured in tone given a vendor was shipping hardcoded SSH keys into consumer hardware with no apparent response to the disclosure.

WUPHF is an open-source (MIT), Go-based multi-agent AI office launched via npx wuphf, opening a browser UI at localhost:7891 with agents (CEO, PM, engineers, designer, CMO, CRO) sharing a channel. Each agent has a private notebook; facts are explicitly promoted to a shared wiki backed by a local git repo at ~/.wuphf/wiki/ with typed-fact triplets, LLM-synthesized briefs committed under an archivist identity, BM25-first retrieval (85% recall@20), and a /lint contradiction-detection pass — nothing auto-promotes. The default since v0.0.6 is the markdown backend; nex (API-key), gbrain (OpenAI for full vector search), and none are also available. Fresh per-turn sessions and 97% Claude API prompt cache hits keep input flat at ~87k tokens/turn, costing ~$0.06 for 5 turns, versus orchestrators that balloon from 124k to 484k tokens over 8 turns. Per-agent MCP scoping loads 4 tools in DM vs. 27 in full office; agents wake push-driven with zero idle burn. Providers include Claude Code, Codex, and OpenClaw; action providers are One CLI (local) or Composio (hosted SaaS OAuth); Telegram bridging is built-in.
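The "BM25-first retrieval" choice is notable because BM25 needs no embedding calls at all — it is pure term statistics. A minimal scorer showing the idea (standard k1/b defaults; the toy corpus is invented, not WUPHF data):

```python
import math
from collections import Counter

# Minimal Okapi BM25 scorer: rank documents for a query using term
# frequency, inverse document frequency, and length normalization.
def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenized if term in d)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = ["shared wiki fact about billing",
        "agent notebook scratch notes",
        "billing policy wiki brief"]
print(bm25_scores("billing wiki", docs))  # middle doc scores 0.0
```

Trading the last few recall points (the cited 85% recall@20) for zero per-query embedding latency and cost fits the project's flat-token-budget design.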

Comments: The "garbage in, garbage out" problem dominates discussion: bad agent-promoted facts get cited by other agents and compound over time, with commenters recommending human review gates or multi-agent convergence voting before promotion. The probabilistic nature of LLMs is raised as structural — longer agent runs increase failure odds, making short restartable runs preferable. Technical questions target the BM25 routing classifier: whether agent-generated long queries get misrouted to the expensive cited-answer loop, negating BM25's latency advantage. Commenters ask about OpenAI-compatible endpoint support (e.g., DeepSeek) and flag security concerns about GitHub for sensitive business docs, suggesting Cloudflare or GCP. Observers note three LLM wiki systems hit the HN front page in 24 hours, calling for collaboration over duplicate effort. TiddlyWiki's 20-year-old single-file self-modifying wiki is cited as precedent. Several appreciate the markdown-first, git-native, local design, while a vocal minority dismisses the Karpathy name-drop and The Office branding as hype.

A developer used Claude Code with Opus 4.6 to complete an abandoned shim called "Sub-standard" that exposes YouTube Music through the OpenSubsonic API, enabling clients like Feishin and Symfonium to stream it. Setup used a uv/FastAPI project with ytmusicapi and yt-dlp, the OpenSubsonic spec, and a CLAUDE.md conventions file covering type annotations, Pydantic V2, and pytest style. The workflow relied on plan mode, iterative prompting, context clearing after major changes, and feeding server logs to Claude on errors. An MVP covering license, search, streaming, and cover art endpoints was working in one evening. Further work added SQLite for metadata, in-memory ytmusicapi caching to avoid rate limits, and on-disk song caching with cleanup for incomplete files on disconnect. The spec covers ~80 endpoints across 15 categories; undocumented details like stripping .view suffixes required iterative discovery. The author distinguishes learning projects from "wish fulfillment" projects, arguing AI tools suit the latter while warning developers must still pursue independent stretch projects to avoid deskilling.
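The ".view" detail refers to a legacy Subsonic convention: clients may request either /rest/ping or /rest/ping.view, so a compatible server must treat them identically. A one-function sketch of the normalization (how the author's shim actually registers routes is not specified; this just shows the mapping):

```python
# Legacy Subsonic clients append ".view" to endpoint paths; OpenSubsonic
# servers are expected to accept both forms. Strip the suffix so one
# handler serves /rest/ping and /rest/ping.view alike.
def normalize_endpoint(path: str) -> str:
    return path[: -len(".view")] if path.endswith(".view") else path

print(normalize_endpoint("/rest/ping.view"))    # /rest/ping
print(normalize_endpoint("/rest/getCoverArt"))  # unchanged
```

With ~80 endpoints across 15 categories, handling quirks like this once at the routing layer is cheaper than per-endpoint special cases.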

Comments: Users broadly validate using AI for abandoned projects. One shares an enthusiastic experience using Claude Code with Godot for long-abandoned game dev projects, noting Claude proactively pushed toward a V0 gameplay loop, helped implement procedural generation algorithms from research papers, generated graphic assets via external tools, and assisted with world-building lore—describing it as among the most fun computer experiences in years. Another echoes the "wish fulfillment" framing, noting AI coding is especially useful for features buried in bloated software, citing Zawinski's Law about software inevitably expanding until it can read email. A dissenting voice argues against paying commercial providers like Anthropic or OpenAI, instead advocating local open models on consumer hardware such as Mac Studio or AMD GPUs with tools like OpenCode, noting they are slower but always available, cost nothing ongoing, and allow users to disengage without feeling compelled to supervise them. One comment was flagged and removed, and a moderation note indicates the post title was edited to reduce provocativeness.

Google plans to invest up to $40 billion in Anthropic, committing $10 billion immediately at a $350 billion valuation, with an additional $30 billion contingent on Anthropic hitting performance targets. The deal follows a similar arrangement with Amazon AWS and comes as Anthropic faces compute capacity constraints, having recently signed contracts to purchase multiple gigawatts of next-gen TPU capacity from Google and Broadcom. This creates a "circular" or vendor-financing dynamic where the invested cash effectively returns to Google through cloud compute spend. Google already holds roughly 15% of Anthropic from prior rounds. Analysts view the investment partly as a strategic hedge against OpenAI rather than pure conviction in Anthropic's standalone value, with the structure giving Google a potential path to acquisition. Secondary market valuations for Anthropic have reportedly been 2–3x higher than this deal's implied $350 billion, raising pricing consistency questions. The deal underscores a broader trend of AI industry consolidation into compute-anchored alliances rather than truly independent competing labs.

Comments: Commenters focus on the circular money flow—Google invests in Anthropic, which spends on Google Cloud compute, keeping capital in-house—with one quipping it resembles spouses investing in each other's businesses. Many compare the dynamic to the dot-com bubble or 2008 credit instruments. Several note Google employees themselves prefer Claude over Gemini internally, suggesting the investment hedges against OpenAI more than it reflects Gemini confidence. The $350 billion deal valuation contradicts secondary market trades reportedly at 2–3x that figure, perplexing observers. Many view Anthropic as a universal insurance policy major tech firms hold in case a competitor wins the AI race, with Amazon, Google, and Microsoft all holding stakes while pushing their own products. Practical users report genuine productivity gains from Claude, countering bubble arguments, while skeptics cite cheaper alternatives like DeepSeek. The dominant concern is that AI capital flows are becoming self-reinforcing loops, with critics noting that when the only buyers propping up valuations are the vendors selling compute to those same companies, history suggests poor outcomes.

Dartmouth provost Santiago Schnell invokes John Milton's 1644 argument that education repairs human capacity, warning AI magnifies an old error: confusing linguistic fluency with genuine understanding. Large language models can draft essays and summarize research, but their utility differs from education, which forms persons capable of judgment and responsibility. AI industrializes the mistake Milton criticized—supplying finished language before students have undergone the questioning and revision that make language meaningful. Because no one can learn in another's place, delegating key cognitive acts to machines means those acts simply don't occur. Teachers become more—not less—important as guides who expose confusion and pose the right next question. Institutions should respond with pedagogical redesign: more in-class writing, oral defense of arguments, seminars around live questions, and transparency requirements when AI is used. The author frames AI as a clarification that has exposed pre-existing institutional habits of rewarding performance over understanding, and suggests this may paradoxically catalyze genuine educational renewal.

Comments: Commenters engage the core tension between human and machine cognition, with one questioning how to divide cognitive responsibility and noting LLMs push society toward "maps with no relation to territory." Several draw historical parallels: the industrial revolution shaped education for factory work, while AI may favor hyper-specialized small firms over large corporations, demanding educators teach adaptability rather than single skills. Others note that well-presented material can paradoxically undermine students' ability to wrestle with hard problems—a dynamic already visible at well-funded schools. Practical suggestions include Fermi questions and iterative bounding to build quantitative intuition, and requiring oral defense of AI-assisted design documents to introduce productive friction. One commenter describes pivoting from software development to manual labor as a model of meaningful, community-rooted, non-automatable work. Most pointedly, a commenter flags the irony that the essay itself appears AI-generated—citing a detection tool's high-confidence assessment across multiple sections—directly undercutting the author's argument with its own example.

Stash is an open-source (Apache 2.0) persistent memory layer for AI agents, built on PostgreSQL with pgvector and exposed via the Model Context Protocol. Unlike platform-locked memory in Claude.ai or ChatGPT, it targets any MCP-compatible agent — Claude Desktop, Cursor, or custom setups — with no vendor lock-in. Memory is organized into hierarchical namespaces (e.g., /projects, /self), where reading a parent path automatically includes all children. A background process continuously synthesizes agent experiences — conversations, decisions, failures — into a structured knowledge graph that detects contradictions and tracks goals over time. Stash provides 28 MCP tools spanning recall, causal chains, contradiction resolution, and hypothesis management. Setup requires only Docker Compose; an OpenAI-compatible backend handles both embedding and reasoning, supporting Ollama, OpenRouter, vLLM, and LM Studio. The embedding dimension (STASH_VECTOR_DIM) must be set before first run and cannot be changed afterward without a full database reset, defaulting to openai/text-embedding-3-small at 1536 dimensions.
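The hierarchical-namespace read described above — fetching a parent path returns everything beneath it — amounts to prefix matching on path keys. A small sketch of that semantics (store layout and function name are illustrative, not Stash's actual API):

```python
# Reading a namespace returns the path itself plus all descendants,
# mirroring the parent-includes-children behavior the summary describes.
def read_namespace(store: dict, path: str) -> dict:
    prefix = path.rstrip("/")
    return {k: v for k, v in store.items()
            if k == prefix or k.startswith(prefix + "/")}

store = {
    "/projects": "index",
    "/projects/stash": "memory layer notes",
    "/self": "agent identity",
}
print(read_namespace(store, "/projects"))  # two entries; /self excluded
```

Note the explicit "prefix + '/'" check: a naive startswith("/projects") would also match an unrelated sibling like /projectsarchive.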

Comments: Despite ambitious claims, commenters broadly view Stash as pgvector plus MCP with simple store/recall functions — effectively RAG in disguise, with no benchmarks proving superiority over markdown files and grep. Claude.ai's background-synthesis approach (auto-summarizing chat history without explicit writes) is seen as superior to store/recall systems, with one user noting this capability alone keeps them on Claude despite wanting to switch. Memory systems generally are criticized for becoming messy and context-polluting at scale; manual context selection with no persistent memory is argued as the only reliable approach. The PostgreSQL dependency is a dealbreaker for embedded use cases like game agents. Team collaboration is flagged as unsupported — per-user memory goes stale as teammates merge PRs. The rapid build timeline and vague marketing with no performance proof draw criticism for misleading claims. Some users request LLM-use transparency disclosures and benchmarks before trusting such tools. A site bug — cursor:none on the body tag — is noted. The space is seen as crowded, with hundreds of similar memory systems already competing.