
Hacker News

Easyduino is an open-source project that reimplements PCB designs for popular microcontroller dev boards — Arduino UNO, Nano, ESP32, ESP32-S3, Raspberry Pi Pico, and STM32 Bluepill — using the free KiCad EDA tool and adding USB-C support. The project addresses fragmentation caused by original designs built across different eras, countries, and proprietary tools (Eagle, Altium). All boards use a 4-layer copper stackup (JLC04161H-7628) targeting JLCPCB manufacturing constraints. Some original components, like the ATmega16U2, were substituted due to supply shortages circa January 2023, and the Pi Pico's 01005 passives were excluded due to assembly cost. Each board includes KiCad project files, schematics, Gerbers, JLCPCB-formatted BOM/CPL files, datasheets, and photos. Compatible with KiCad v8–v10, with v10 Jobsets streamlining Gerber and BOM generation, though multi-project Git integration has known limitations. V1.0 hardware bugs exist in the RP2040 (Flash pin mix-up) and ESP32-S3 (missing RST/SUSPEND pull resistors on CP2102), with v1.1 revisions awaiting testing. Licensed under CERN OHL v2 Permissive, allowing commercial use with license attribution only.

Comments: Users widely praise Easyduino as filling a long-standing gap — having open-source, standardized PCB templates lets designers fork a known-working board, add custom capabilities, and retain a standard footprint, lowering the barrier to custom hardware. One commenter shared a personal Arduino UNO redesign that exposed how routing practices affect switching characteristics, framing these reference designs as valuable learning tools. Others specifically highlight the ESP32 boards as confidence-builders, noting that working around an ESP32 module is conceptually straightforward but that unknown pitfalls make a clean reference design invaluable. The fork-and-template usage model is the dominant framing in the discussion, with one user explicitly asking whether forking is the intended workflow. A parent asked for guidance on introducing PCB design to children aged 9–13, noting mixed results with a 3D printer as a gateway to physical computing. Several users expressed direct intent to incorporate the designs into their own projects, and the overall reception treats the project as a practical, reusable starting point rather than merely an educational reference.

l123 is an open-source terminal spreadsheet that faithfully recreates the Lotus 1-2-3 Release 3.4a for DOS (1993) interaction model using a modern Rust and IronCalc compute stack with native .xlsx round-trip support. It features slash menus, a three-line control panel, 13 modes (READY, LABEL, VALUE, EDIT, POINT, etc.), keyboard-first workflows, and a WYSIWYG icon panel with mouse support across 17 icons. Formulas use original 1-2-3 syntax (@SUM, .., #AND#), translated internally to Excel syntax before reaching IronCalc. Milestones completed so far cover grid/nav, control panel, formula engine, menu system, .xlsx/CSV I/O, 3D sheets and undo, printing/range search, graphing (7 chart types, SVG/PNG), and the WYSIWYG panel; macros and final polish remain. The architecture is a strict layered Rust workspace keeping the UI engine-agnostic. Its two core promises: an experienced 1-2-3 user can drive it cold without documentation, and files round-trip cleanly to/from .xlsx. Testing relies on keystroke transcript acceptance tests. It explicitly is not a DOS emulator, visual homage, or macro player for .WK3 files.
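A rough sketch of the translation step described above, turning 1-2-3 syntax into Excel syntax before it reaches IronCalc. The rewrite rules here (strip the @ prefix from functions, convert `..` ranges to `:`) are inferred from the examples in the summary, not l123's actual translator:

```python
import re

def lotus_to_excel(formula: str) -> str:
    """Translate a small subset of 1-2-3 formula syntax to Excel syntax.
    Illustrative only; the real translation layer is not shown here."""
    s = formula
    s = re.sub(r'@([A-Za-z]+)\(', r'\1(', s)                   # @SUM( -> SUM(
    s = re.sub(r'(?<=[A-Za-z0-9])\.\.(?=[A-Za-z$])', ':', s)   # A1..A5 -> A1:A5
    return s

print(lotus_to_excel("@SUM(A1..A5)"))  # -> SUM(A1:A5)
```

Infix operators like #AND# would need a real expression parser rather than token substitution, which is presumably part of why the formula engine is its own milestone.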

Comments: Users highlight sc-im as the most prominent existing terminal spreadsheet with a long HN history, and also point to newer alternatives sheetsui and cell as similar projects worth exploring. VisiData is enthusiastically recommended as a highly capable terminal tool for data wrangling and exploration. One commenter reflects nostalgically on early career memories of Lotus Notes administration — a distinct product from Lotus 1-2-3 — appreciating its admin UI capabilities. Another commenter raises the adjacent question of terminal-compatible word processing, wondering if a WordPerfect-style DOS-era alternative exists and whether Vim could serve as a document editor for printing workflows.

Project Herakles, a three-year University of Cádiz investigation in the Bay of Algeciras, identified 151 archaeological sites including 134 shipwrecks ranging from a 5th-century BC Punic vessel to WWII-era remains, with 34 documented so far. Phoenician, Roman, medieval Islamic, Spanish, British, Venetian, and Dutch ships reflect the strait's role as a global maritime bottleneck for trade, exploration, and war. Notable finds include 23 Roman vessels, medieval ships potentially linked to late Islamic Spain, and the Puente Mayorga IV—an 18th-century Spanish gunboat that disguised itself as a fishing boat before attacking British warships with prow-mounted cannons. A mysterious book-shaped wooden box found aboard proved anticlimactic: it contained wooden combs rather than spy documents. Sites face growing threats from port dredging, construction, rising sea levels altering sediment layers, and invasive algae. Researchers are producing VR models and 360-degree videos to build public awareness and push Andalusian regional authorities toward formal site protection.

Comments: The comments raise a pointed methodological question: if the Bay of Algeciras was historically a high-traffic chokepoint, why has systematic archaeological survey only recently occurred? Commenters speculate that climate change may have converted a known but deferred research opportunity into an urgent one, noting the article's references to rising sea levels exposing and destabilizing sites and invasive algae overgrowing wrecks. The implication is that urgency, rather than discovery, may be the true catalyst—that researchers likely knew wrecks were there but were compelled to act now by accelerating environmental threats. The article doesn't directly address why the survey is only happening now, leaving this a credible but unconfirmed interpretation.

China's National Development and Reform Commission ordered Meta to unwind its $2 billion acquisition of Manus, a Singapore-based AI startup founded in China, citing export control and technology transfer laws. Manus develops general-purpose AI agents for tasks like coding and market research, launched its flagship product in March 2025, hit $100M ARR in eight months, and raised $75M from Benchmark before closing China offices and relocating to Singapore. Meta announced the deal late last year to integrate Manus into its AI products and insists it "complied fully with applicable law." China's Ministry of Commerce opened an investigation in January, and co-founders CEO Xiao Hong and chief scientist Ji Yichao were summoned to Beijing and barred from leaving. The move targets the "Singapore-washing" strategy — relocating from China to Singapore to sidestep scrutiny from both Beijing and Washington — and the acquisition is reportedly already complete, making any unwind legally uncertain.

Comments: Users frame China's action as its version of U.S. export controls, asserting authority over technology with domestic origins regardless of corporate domicile, and note Beijing had signaled through Article 12 of its export control law that it would challenge the deal before it closed — Manus proceeded anyway. The co-founders' detention in China is seen as Beijing's primary enforcement lever, with observers questioning what tools China has against a Singapore-domiciled firm whose acquirer has no China operations. The "Singapore-washing" loophole is now viewed as effectively broken, alarming Chinese founders and VCs who relied on it. Manus already displays "by Meta" branding, underscoring how far integration had progressed, and legally unwinding a completed acquisition is seen as murky. Some warn this is an early skirmish in a longer AI export-control escalation, with parallels drawn to semiconductor supply-chain talent obfuscation already underway.

Apple is warning enterprise users about two potential breaking changes in macOS 27, due September 2026. First, AFP support is expected to be dropped—telegraphed since OS X 10.9 Mavericks 12 years ago—which would block users with Apple silicon Macs from upgrading unless they replace Time Capsules or AFP-only NAS devices. Second, Apple may require connections to MDM, DDM, Automated Device Enrollment, app distribution, and software update servers to use TLS 1.2 or higher, ATS-compliant ciphersuites, and valid ATS certificates. Auditing compliance requires installing a special logging profile and running a complex terminal predicate command, as no GUI tool exists for this. Apple says the TLS change could come "as early as the next major software release" but has left room to delay if enterprise impact proves severe. Both changes are non-retroactive, so Macs not upgraded to macOS 27 remain unaffected, but Apple silicon users have no fallback option for AFP storage. Local Content Caching servers are explicitly exempt from the TLS requirements.

Comments: Users note AFP deprecation will hit Time Capsule owners hardest—Apple's last unit shipped in 2013, unsupported since 2018—with Ubiquiti's UNAS-2 cited as a solid SMB-based Time Machine replacement for UniFi users. The TLS 1.2 floor is considered long overdue since TLS 1.1 was deprecated roughly five years ago. Some question whether Time Capsules would still work via wired direct connection even after AFP removal. Critics argue macOS SMB support remains slow and buggy 12 years after becoming the primary protocol, suggesting low internal priority. Others contend Apple should dedicate even one developer to aging-but-used features rather than abandoning them. Skepticism surfaces over the "coming in macOS 27" framing, since nothing is formally confirmed. Users recall Apple's 2015 networking code rewrite that caused widespread breakage and had to be reverted. Speculation arises that Apple may eventually push Time Machine to iCloud as a services revenue play. Time Machine itself is noted as visually broken, with deteriorating animations indicating bare-minimum maintenance.

The two original ZSNES developers have reunited to build SUPER ZSNES, a completely rewritten SNES emulator with a GPU-powered PPU core enabling high-resolution Mode 7 and per-game visual upgrades. It delivers more accurate CPU and audio cores than the original, alongside a modernized version of the classic snow-falling UI. The standout feature is the Super Enhancement Engine, currently supporting 7 games with options for manually redrawn high-resolution graphics, texture/normal mapping, overclocking slowdown-heavy titles, widescreen where internally supported, uncompressed audio sample replacement, and 3D height-mapped Mode 7. All enhancements can be toggled individually, and enhancement data contains no ROM or copyrighted content. Standard features include save states, rewind, fast forward, cheat codes, and auto save history, with netplay and special chip support (DSP1, SuperFX) planned. The project touts "No Vibe Coding, Classic development style." This is an early build: special chip emulation is absent, optimization is incomplete, and emulation bugs are present.

Comments: Users express nostalgia for ZSNES as a childhood staple from the late 1990s, welcoming the reunion of its original developers. Technical discussion centers on the GPU PPU implementation, with some noting it appears to render tile-by-tile and line-by-line for Mode 7 rather than capturing full register state per pixel — a tradeoff enabling visual enhancements but sacrificing some accuracy. The Unity engine choice draws skepticism over its necessity for SNES emulation and raises malware concerns. The closed-source nature and the unlisted game roster for the enhancement engine are cited as gaps. The "No Vibe Coding" declaration prompts curiosity about whether AI was entirely avoided even for code review. Users praise per-game enhancements over generic HD filters and appreciate per-feature toggles. Uncompressed audio replacement sparks discussion about using ML to automate high-quality sample replacement at scale, referencing community work tracking original SNES composer samples like Nobuo Uematsu's Final Fantasy tracks. MVG's video is cited as better showcasing the project than its homepage.

The author, experiencing chronic brain fog cycles driven by poor sleep, excessive caffeine, and media-fueled dopamine loops, discovered that staring at a blank wall for 5–10 minutes effectively restored focus during midday slumps. The technique, drawn from a video by Simple Lucas, combines defocused peripheral vision to activate the parasympathetic nervous system with "mind blanking." A 2012 paper estimated 34 GB of daily information exposure per person in 2008, which the author extrapolates to roughly 87 GB today after weighting for media quality. The brain fog cycle involves poor sleep leading to heavy caffeine use, enabling continued media consumption, which disrupts sleep further — a self-reinforcing loop. Small, repeated dopamine hits from scrolling create a tolerance hole requiring ever-stronger stimulation. Wall-staring proved harder than expected, requiring sustained effort similar to working out, but yielded significant focus recovery within a single session. The method is positioned as a zero-cost, screen-free intervention requiring no equipment, usable anywhere regardless of circumstance.

Comments: Many commenters identify the practice as a reinvention of meditation, citing Zen (Bodhidharma's 9-year wall-gazing), Shamatha/Zhine traditions, and likely activation of the default mode network. Several note smartphones steal not just attention but "disattention" — the unstructured mental wandering that allows the mind to recover and generate insight. A common alternative suggestion is walking without headphones, achieving the same reset with added physical benefits. A Zhine practitioner clarifies the goal is gentle, repeated redirection of drifting attention — not forced concentration — gradually quieting inner anxiety. Others push back on the productivity framing, arguing the real value is cultivating inner mental resources rather than maximizing output. Napping, nature walks, zone-2 cycling, and guitar are offered as equivalents. One commenter distinguishes wall-staring from meditation entirely, calling it simply "rest" — a concept modern culture has largely lost. The accessibility of wall-staring over nature walks is defended for those without nearby green space. Several note caffeine dependency and sleep disruption as the more fundamental root problem.

Scratch has suffered a cascading series of SVG security vulnerabilities since 2019, each fix exposing new attack surfaces. Early patches used regex and DOMPurify, but were bypassed via case sensitivity and inline event handlers. HTTP leaks emerged through image href attributes, CSS @import, and CSS url() — each requiring more parser complexity. A 2024 XSS found Paper.js receiving unsanitized SVGs, while 2026 brought three new url() bypasses exploiting CSS escape codes, multiple url() per attribute, and CSS variables. A full-page restyling attack via long CSS transitions remains unfixed, as does an image-set() HTTP leak disclosed after six months without a fix. Future CSS specs (src(), image()) will open more vectors once browsers implement them. Claude Opus 4.6 independently found the image-set() issue and a css-tree relaxed nesting bypass — where relaxed CSS nesting syntax is parsed as raw text, escaping sanitization entirely. TurboWarp instead sandboxes SVGs in an iframe with strict CSP, blocking all external requests and style bleed-out without ever-growing sanitization code.
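The escape-code bypass class described above is easy to reproduce against a naive filter. This toy sanitizer (not Scratch's actual code) strips literal `url(...)` tokens but misses the same token written with CSS character escapes, which a browser's CSS parser would still decode back to `url(`:

```python
import re

def naive_strip_url(css: str) -> str:
    """Toy sanitizer: removes only the literal spelling of url(...)."""
    return re.sub(r'url\([^)]*\)', '', css, flags=re.IGNORECASE)

plain   = "fill: url(http://evil.example/leak);"
escaped = r"fill: \75\72\6c(http://evil.example/leak);"  # CSS escapes for u, r, l

print(naive_strip_url(plain))    # literal url() is removed
print(naive_strip_url(escaped))  # escape-coded url() survives -> HTTP leak
```

This is the shape of argument behind the post's conclusion: structural isolation (TurboWarp's sandboxed iframe with a strict CSP) blocks the request regardless of spelling, while token-level sanitization has to enumerate every encoding the CSS parser accepts.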

Comments: Users broadly agree CSP is the only credible long-term fix, noting that CSP rules set via HTML meta tags in srcdoc iframes are locked on load before JavaScript can subvert them. Some suggest restricting SVG to a safe subset — simple paths and fills — covering most real-world files, though the HTML Sanitizer API's default config already blocks inline styles, making it unhelpful for CSS sanitization. Users call for a hypothetical sandbox attribute on inline SVG analogous to iframe sandbox, though browser adoption would take years. Google Slides lacking SVG support despite a 15-year-old feature request is attributed to these exact challenges. TinyVG is raised as a simpler vector alternative but lacks animation and broad adoption. Several users note SVG is a markup language, not an image format, and treating it otherwise is the root misunderstanding. The vulnerabilities are characterized as well-known HTML XSS vectors, not SVG-specific novelties. Claude Opus 4.6 is noted to have actively contributed by independently discovering sanitization holes — an unusual acknowledgment of AI-assisted security research within the post itself.

World (formerly Worldcoin until October 2024), co-founded by Sam Altman, scans irises via spherical orbs and assigns a "proof of humanity." On April 17, 2026, strategic partnerships with Tinder, Zoom, and DocuSign were announced to verify users and reduce deepfakes and fraud. The company operates 7,000 orbs across six U.S. cities and claims 18 million verifications in 160 countries. Nine governments—Kenya, Brazil, Spain, Portugal, Hong Kong, Germany, Indonesia, the Philippines, and Thailand—have halted or banned operations over privacy violations and lack of informed consent. A 2022 MIT Technology Review investigation found deceptive onboarding and undisclosed collection of vital signs beyond iris data. Brazil specifically outlawed the company's practice of paying users $50 in cryptocurrency for iris scans, ruling it violated consent law. Edward Snowden condemned it for "cataloguing eyeballs." World published citizen-support studies from Portugal, Spain, and South Korea and released a revenue blueprint for companies adopting its digital ID, while looser U.S. biometric regulations make American expansion easier than in the EU.

Comments: Technical critics argue iris images can be fabricated client-side by AI and shared, making the system simultaneously invasive and ineffective without offline verification or anti-cloning measures. Beyond Tinder, Zoom, and DocuSign, the April 17 announcement slide reportedly included Okta, Vercel, Shopify, AWS, Coinbase, and others. Many note the irony of an AI company whose products enable bots now selling human-verification as a solution. Privacy concerns center on the absence of uniform U.S. biometric law, the risk of centralized ID authorities minting fraudulent identities via sybil attacks, and lack of civil or criminal liability for data misuse. Some see value in bot reduction on platforms like Tinder, while others call for open-source alternatives before big tech or governments impose worse systems. Critics frame the rollout as surveillance overreach, and several predict personal identification tracking will become universal within a decade given the economic incentives for data-hoarding companies.

GitHub is transitioning all Copilot plans to usage-based billing on June 1, 2026, replacing premium request units with GitHub AI Credits consumed based on token usage. Base prices stay the same ($10/mo Pro, $39/mo Pro+, $19/user/mo Business, $39/user/mo Enterprise), with each plan's monthly price equaling its included credit allotment. Code completions remain free of credit consumption, but fallback to lower-cost models when credits run out is eliminated. Annual plan subscribers face steep model multiplier increases (Opus 4.6/4.7 jumping from 3x to 27x, GPT-5.4 from 1x to 6x) starting June 1, before transitioning to monthly billing at expiration. Business and Enterprise customers receive promotional credit boosts for June-August ($30 and $70 respectively), plus new pooled usage and admin budget controls. The shift is driven by agentic coding sessions consuming far more compute than simple chat, making per-request flat pricing financially unsustainable. A preview billing dashboard launches in early May to help users prepare before the transition.
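The practical effect of a multiplier change is simple arithmetic. The per-request base cost below is a made-up number purely for illustration; only the multipliers and the Pro plan's $10 allotment come from the announcement:

```python
# Illustrative arithmetic only: base_cost is a hypothetical per-request
# credit cost, not a published GitHub figure.
allotment = 10.00   # Pro plan's monthly credit allotment, $
base_cost = 0.04    # hypothetical credits consumed by a 1x-multiplier request

def requests_per_month(multiplier: float) -> float:
    return allotment / (base_cost * multiplier)

old = requests_per_month(3)    # Opus at the old 3x multiplier
new = requests_per_month(27)   # Opus at the new 27x multiplier
print(round(old / new, 6))     # -> 9.0: the 9x usage reduction commenters cite
```

Whatever the real base cost, the ratio depends only on the multipliers, which is why annual subscribers describe the 3x-to-27x jump as a flat 9x cut in usable requests.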

Comments: Commenters widely view this as the end of subsidized AI inference, noting flat-rate plans allowed users to consume hundreds of dollars in tokens monthly for $10-39. Model multiplier increases draw sharp criticism, with annual subscribers effectively facing a 9x reduction in usage (Opus jumping from 3x to 27x). Many plan to cancel in favor of OpenRouter, direct API access, or local LLMs via Ollama, arguing GitHub AI Credits offer no discount over direct API pricing, eliminating the incentive to stay. Annual subscribers feel betrayed, with some in Germany and Australia considering legal action under consumer protection laws. VS Code integration depth remains the main value argument, sparking debate about alternatives like Continue, Cline, and Claude Code. Some note that per-request model abuse—agents forced into infinite loops to extend sessions—partly drove the change. Enterprise users flag that code review now consumes both AI Credits and GitHub Actions minutes, further eroding value. Users already running local models express vindication, while others debate whether open-source models on hardware like M3 Max with 64GB RAM can now replace cloud subscriptions entirely.

Mail-order occultism flourished in the early 20th-century US, enabled by linotype printing, cheap pulp paper, and expanding postal networks. Weber's "disenchantment" thesis proved wrong: modernity redistributed spiritual practice privately rather than eliminating it. Pioneer Sydney Flower ran multiple fictitious imprints from Chicago, was arrested for postal fraud, yet edited a magazine from prison. AMORC, founded in 1915 by Harvey Spencer Lewis, claimed an initiated Rosicrucian lineage and still operates today, as does Paul Foster Case's Builders of the Adytum (BOTA), teaching Kabbalah and tarot as rigorous self-development. Maurice Doreal's Brotherhood of the White Temple blended Gnostic mysticism with pulp sci-fi, seeding later conspiracy thinking. Many courses, including the Mystic Brotherhood University's, anticipated cognitive-behavioral therapy decades before CBT was codified. Psychiana charged roughly $1 per weekly lesson in the 1930s. These courses ultimately commodified initiation, substituting therapeutic self-management for genuine transcendence under an American ideology of self-reliance.

Comments: Commenters connect the article's themes to contemporary fringe culture, with one pointing to the 1988 publication "High Weirdness By Mail," a physical directory of subscription-based fringe services — including neo-pagan and occult publications charging as little as 75 cents per issue — available via the Internet Archive. Another offers a wry joke invoking Bayes' Theorem as a modern stand-in for occult-style self-help, lightly satirizing the article's argument that esoteric systems migrated into rationalist frameworks. A third shares a personal anecdote about receiving BOTA correspondence materials in Toronto's Little Portugal neighborhood, noting that neighbors repeatedly opened the bulky sealed envelopes from the communal mailbox — technically an indictable offense in Canada — underscoring the enduring mystique and curiosity these materials still provoke decades after the mail-order occult peak.

A new CX team was created without consulting the Engineering Manager responsible for the related product tribe, reporting directly to a separate product vertical leader rather than tribe leadership. The team planned a centralized microfrontend React dashboard, but the EM viewed this as misaligned with the tribe's shift toward E2E product ownership per team. Rather than integrate, the EM directed each product team to build simpler Spring Boot template dashboards, deliberately sidelining the CX team and accepting duplicated effort as a strategic tradeoff. Adoption stalled when the product manager absorbed all support work herself; pairing sessions and Jira-aligned workflows eventually enabled CX staff to resolve tickets autonomously. Within a month, resolution dropped from days to hours, meeting the OKR. After five months of no progress from the dedicated CX team, leadership disbanded it. The author frames bypassing the CX team as the least-wasteful path, citing Team Topologies' principle against forced platform adoption.

Comments: Commenters broadly frame the author's behavior as office politics rather than principled engineering leadership. The CX team did not report to the EM, and critics note he never attempted to collaborate with the other product leader managing it—instead systematically ensuring the team had no adoption path within his sphere until it dissolved. Several users flag "mysterious new team nobody told me about" as a red flag, suggesting the team may have been deliberately created outside his control for good reason. Others are puzzled by hiring three full-time employees to build what the author himself calls a simple dashboard requiring basic API calls and a web interface. One commenter sympathizes with people hired into the CX roles who got caught in organizational crossfire. The "tribe" terminology from Spotify's squad model confuses some readers. One draws a parallel to a similar cross-functional team shot down at their own company, prompting reflection on organizational patterns. The prevailing read is that Team Topologies vocabulary was used to justify protecting organizational turf.

pgBackRest, a PostgreSQL backup and restore tool developed over 13 years by David Steele, is being discontinued after the loss of corporate sponsorship following Crunchy Data's acquisition. Steele was unable to secure a new position or sponsors to sustain the project and chose a clean break over sporadic, poor-quality maintenance. Version 2.58.0 is the final stable release; Steele requests forks use a new name. The tool was feature-rich, supporting parallel compression (lz4, zstd), remote backup via TLS/SSH, S3/Azure/GCS object storage, full/differential/incremental and block-level backups, WAL archiving with async push/get, page checksum validation, delta restores, and repository encryption across ten PostgreSQL versions. Supabase was its sole remaining sponsor. The repository will be archived, and any future fork must independently rebuild user trust from scratch.

Comments: Users express widespread disappointment, with many running pgBackRest on multi-terabyte production databases and calling it the premier PostgreSQL backup solution — noting alternatives like WAL-G, Barman, databasus, and pgbackweb exist but fall short on feature depth and documentation. The Crunchy Data M&A event is identified as the root structural cause, illustrating how critical infrastructure can be silently defunded by upstream corporate decisions users have no visibility into. The broader OSS sustainability problem draws sharp commentary: billion-dollar companies built on PostgreSQL failed to fund one developer's salary, and most users consumed the tool for free without donating. Some criticize the decision to archive the repo and prohibit name reuse rather than transferring stewardship to a qualified successor, calling it analogous to salting a community garden. There is self-reflection about passive reliance on free tools, calls for better funding-need visibility in OSS projects, and concern that economic pressure on developers will accelerate abandonments industry-wide over the next two years.

DSPi converts a Raspberry Pi Pico (RP2040) or Pico 2 (RP2350) into a plug-and-play USB audio interface with a full DSP pipeline, compatible with macOS, Windows, Linux, and iOS without drivers. It accepts 16/24-bit PCM stereo at 44.1–96 kHz and outputs via S/PDIF or I2S — 8 channels on RP2350, 4 on RP2040 — plus a PDM subwoofer. The DSP chain covers per-channel preamp, 10-band parametric EQ, RMS volume leveller with lookahead, BS2B headphone crossfeed with interaural time delay, ISO 226:2003 loudness compensation, a matrix mixer, and per-output delay up to 85 ms. The RP2040 runs at 307.2 MHz overclocked using Q28 fixed-point with hand-optimized ARM assembly; the RP2350 uses hardware FPU with a hybrid SVF/biquad architecture for superior low-frequency accuracy. Core 0 handles USB and primary DSP; Core 1 runs either a 2nd-order delta-sigma PDM modulator or parallel EQ for additional outputs — mutually exclusive modes. A 10-slot preset system, runtime GPIO reassignment, OTA firmware updates, and diagnostics for peak metering, USB PHY errors, and CPU load complete the feature set.
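As a rough illustration of what the Q28 fixed-point arithmetic mentioned above looks like (hypothetical helper names; DSPi's actual RP2040 path is hand-optimized ARM assembly, not Python):

```python
# Sketch of Q28 fixed-point math of the kind an RP2040 DSP chain needs.
Q = 28
ONE = 1 << Q          # 1.0 in Q28

def to_q28(x: float) -> int:
    return int(round(x * ONE))

def q28_mul(a: int, b: int) -> int:
    return (a * b) >> Q    # wide product, rescaled back to Q28

# A first-order low-pass y[n] = y[n-1] + k*(x[n] - y[n-1]), all in Q28:
k = to_q28(0.25)
y = 0
for x in [to_q28(1.0)] * 32:   # unit-step input
    y += q28_mul(k, x - y)
print(y / ONE)                  # settles toward 1.0
```

With 28 fractional bits in a 32-bit word there are only about 3 bits of integer headroom above ±1.0, which is one reason EQ stages with boost need careful gain staging on fixed-point hardware, and why the RP2350's hardware FPU path is described as having superior low-frequency accuracy.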

Comments: Users highlight CamillaDSP with a calibrated UMIK-1 microphone as a capable alternative for room correction on a Raspberry Pi 3B at roughly 20% CPU, with EasyEffects as a real-time desktop option. The Teensy 4 and its Audio Library are noted as another microcontroller DSP platform. Several observers flag that DSPi is output-only with a single USB stereo input pair, limiting multi-channel input use cases. The creator acknowledged using Claude Opus 4.5 as an AI assistant for development busy-work. Hardware accessibility is raised as a barrier, with requests for tutorials since the Pico lacks built-in analog outputs. Open questions include 192 kHz support, guaranteed processing latency, analog I/O options, and whether the RP2040/RP2350's 264/520 kB RAM is sufficient for a stereo reverb. Users also ask about guitar effects processing, using the device in reverse for cheap multi-channel USB inputs, and whether it could replace a basic USB DAC. One commenter proposes minor ARM assembly micro-optimizations in hot DSP loops, noting dead-register redundancies.

Google's Decoupled DiLoCo splits large AI training runs across decoupled compute islands called learner units with asynchronous data flow, isolating disruptions so other units keep learning. It builds on Pathways (async distributed AI) and DiLoCo (low inter-datacenter bandwidth), adding self-healing: tested via chaos engineering, it survived loss of entire learner units and reintegrated them on recovery. Google trained a 12B-parameter model across four U.S. regions over 2-5 Gbps of standard WAN — no custom inter-facility infrastructure needed. The system ran 20x faster than conventional sync methods by folding communication into computation windows, eliminating blocking bottlenecks. It also supports mixing hardware generations like TPU v6e and TPU v5p in one run, matching single-type ML performance, extending hardware lifespan and easing capacity bottlenecks as newer chips roll out unevenly.
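The two-level structure that DiLoCo-style training uses (many cheap local steps per island, rare averaged syncs over the slow WAN) can be sketched on a toy 1-D problem. This shows only the sync pattern, not Google's fault-tolerance or communication-overlap machinery:

```python
# Toy sketch of a DiLoCo-style two-level loop; an illustration of the
# pattern, not Google's implementation.
def local_steps(w, data, lr=0.1, steps_per_sample=1):
    for x in data:                     # inner loop: many cheap local updates
        w -= lr * (w - x)              # gradient of 0.5 * (w - x)**2
    return w

w_global = 0.0
for outer in range(5):                 # outer loop: rare, low-bandwidth syncs
    deltas = []
    for island_data in ([1.0] * 20, [3.0] * 20):   # two "learner units"
        w_local = local_steps(w_global, island_data)
        deltas.append(w_local - w_global)          # only the delta is sent
    w_global += sum(deltas) / len(deltas)          # averaged outer update
print(round(w_global, 3))              # -> near 2.0, the mean of both islands
```

Because each island ships one small delta per outer round instead of gradients per step, the inter-datacenter link only needs the 2-5 Gbps the article cites; losing an island for a round just drops its delta from the average, which is the self-healing property chaos testing exercised.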

Comments: Commenters take two distinct angles. One flags potential national security implications, noting that training frontier AI across distributed, internet-connected data centers raises concerns about access and oversight, though no specific threat model is elaborated. The other questions the degree of novelty, acknowledging the significant engineering effort in adapting the approach for AI training, but observing that distributing workloads across geographically separated clusters and combining results is a well-established pattern in non-AI distributed computing, implying the core innovation may be incremental rather than foundational.

A 2019 Virginia bank robbery spawned a Supreme Court case on geofence warrants, which compel tech companies to produce location data from all cellphones near crime scenes. Police forced Google to search 500 million accounts to identify devices near the bank in the 30 minutes before and after the robbery, leading to Okello Chatrie's conviction. Oral arguments scrambled ideological lines: conservative Gorsuch and liberal Sotomayor both challenged the government on whether warrantless location access would extend to emails and cloud documents. The government argued people "advertise" their location publicly, making it unprotected; critics countered that password-protected cloud data is private property. Justices noted the government conceded it could search location records of abortion clinic visitors or political event attendees without a warrant. Google now stores location data on-device and no longer responds to geofence warrants, though roughly 30 other providers still retain similar data centrally.

Comments: Users following oral arguments noted that one justice suggested people could opt out of location tracking to sidestep privacy concerns, while others observed that the justices and the petitioner agreed any cloud-stored data — emails, photos, documents — would be accessible without a warrant under the government's theory. The government conceded it could run such searches on anyone, including abortion clinic visitors or political event attendees, drawing sharp judicial pushback. Commenters noted Kavanaugh appeared decided before arguments, introducing exigency hypotheticals into a case where the warrant was issued a week after the crime. Google's shift to on-device storage is viewed positively, but roughly 30 other providers still retain similar data centrally, and carriers may hold records too. Parallel construction — using secretly obtained data to guide visible investigations — is raised as making any ruling partially academic. A minority view argues investigators should have tightly guarded access to location data, citing cases where its absence left crimes unsolved. Some compare geofence warrants to proliferating license-plate reader networks, which face far less legal scrutiny.

FDA approved Otarmeni (lunsotogene parvec-cwha), the first dual AAV vector-based gene therapy, for patients with severe-to-profound sensorineural hearing loss caused by biallelic OTOF gene mutations. The OTOF gene encodes otoferlin, essential for sound signal transmission; variants account for 2–8% of inherited non-syndromic deafness. Otarmeni is a one-time biologic-device combination surgically injected into the cochlea, delivering a functional OTOF gene copy to restore otoferlin. In a 24-patient pediatric trial, 80% of the 20 evaluable patients experienced improved hearing, an improvement that would not be expected without treatment. FDA approved it just 61 days after BLA filing, tied for the fastest in modern history, as the first gene therapy under the Commissioner's National Priority Voucher program. The therapy requires preserved outer hair cell function and no prior cochlear implant in the treated ear. Common side effects include middle ear infection, nausea, dizziness, and procedural pain. The approval was granted to Regeneron Pharmaceuticals; continued approval hinges on long-term durability and speech/quality-of-life outcome data.

Comments: Community responses blend personal testimony, scientific caution, and broader hope. Several users with hearing loss note this treatment won't help their specific conditions — mumps-induced nerve deafness, noise-induced loss, or non-OTOF variants — but welcome it as progress. One user with a GJB2 family mutation describes using preimplantation genetic testing to avoid passing it on, and hopes related Decibel Therapeutics pipeline therapies (now under Regeneron) reach market in time. Scientific concerns are raised about otoferlin's expression in brain and bone marrow, risk of off-target AAV spread, and need for ten-year outcomes before celebration. Questions arise about pricing and whether the therapy could help adults with non-genetic SNHL. A quoted disability scholar argues such therapies risk framing deafness purely as a medical problem rather than a difference to accommodate. Others credit the accelerated approval to improved FDA processes and current health leadership.

OpenAI published a principles document outlining five core values: Democratization (resisting AI power consolidation), Empowerment (AGI enabling everyone to achieve their goals), Universal Prosperity (widespread flourishing requiring new economic models), Resilience (managing AGI-introduced risks), and Adaptability (updating positions as circumstances change). The document states that power should be held decentrally rather than by a handful of companies, and envisions AI delivering broad societal benefits like curing diseases and expanding access to education and healthcare. It also references a prior charter commitment that if a safety-conscious competitor nears AGI first, OpenAI would stop competing and assist them. The release came amid controversy over OpenAI's military contract and renewed public scrutiny of Sam Altman, timed just before the Musk v. Altman trial, prompting widespread skepticism about the gap between stated values and the company's actual conduct.

Comments: Critics overwhelmingly view the principles as hollow, pointing to the direct contradiction between the stated democratization principle and OpenAI's practice of closing its models. Many doubt OpenAI would ever actually cede ground to a safer competitor as its charter once pledged. Commenters note glaring omissions around autonomous weapons, mass surveillance, and AI warfare. Some offer sardonic "translations" of each principle—democratization as centralization, empowerment as compliance, adaptability as revisionism—arguing the document is corporate doublespeak. Others express concern that AI-driven prosperity will concentrate wealth among a handful of deca-billionaires while displacing millions of workers, and that resource constraints limit any true universal prosperity. The document's release shortly after the military contract signing and a widely-read piece on Altman's credibility is seen as a reputation-repair maneuver. Several commenters express broad cynicism about tech companies publishing values documents, drawing comparisons to Google's abandoned "don't be evil" motto and questioning whether any strongly held principles exist at OpenAI beyond profit.

Workers on the Magic: The Gathering Arena team at Wizards of the Coast have formed "United Wizards of the Coast - CWA," with a supermajority signing union cards before informing company leadership and calling for voluntary recognition. The union aligns with the Communications Workers of America, which has been active in game industry organizing. Key grievances detailed in the workers' open letter include mounting pressure from leadership to adopt LLMs and generative AI tools despite explicit employee objections, Hasbro's policy of claiming ownership over creative work employees produce in their own time using personal resources, and lack of layoff and remote work protections. Workers frame the effort as a step forward not just for their team, but for the broader gaming industry, aiming to demonstrate that a game studio can collectively advocate for better conditions while continuing to build a successful product.

Comments: Reactions are sharply divided. Supportive voices — including a CWA member and software developer who has helped organize game workers — encourage questions about the union and push back on ideological dismissals. Others express skepticism that unions can take hold in gaming given the industry's competitive structure, exposure to offshoring, and relatively replaceable labor compared to sectors like rail or aviation. The workers' letter clause prohibiting Hasbro from claiming ownership of personal creative projects — including open-source work done on personal hardware — drew particular attention as outlandish but apparently common in creative-field contracts. Several users draw parallels to Rockstar's alleged illegal firing of GTA 6 developers organizing a union. Hostile commenters characterize the union as "cancer" and call for terminating organizers, while skeptical customers worry about Arena's long-term viability. A recurring thread ties the union's AI grievances to a years-old WotC claim that Arena used "sophisticated machine learning" to interpret card rules — widely dismissed at the time but now plausible, making resistance to mandatory AI adoption a pointed issue.

The project builds a 3x3 grid flipdisc wall display using 9 AlfaZeta panels (84x42 discs total) powered by a 24V 10A Meanwell supply and framed with 80/20 aluminum extrusions. Communication runs over RS485 using 3 USB adapters; each frame encodes a start byte, flush or buffer command, board address, RLE-compressed binary image data, and an end byte. An Nvidia Orin Nano handles ML processing alongside an IMX708 camera and Waveshare audio board, while the open-source Node.js "flipdisc" npm library supports AlfaZeta and Hanover boards over USB or Ethernet. Rendering stacks PIXI.js, Three.js, Matter.js, and GSAP; MediaPipe gesture recognition runs Python scripts via ZeroMQ IPC. Scenes—NYT RSS, Spotify, and weather—are managed via REST API and WebSocket, controllable through an Expo mobile app supporting queue management, scene config, and freehand drawing. Design uses 3x5 bitmap fonts, Floyd-Steinberg dithering for images, and Bayer 4x4 ordered dithering for UI. Panels alone cost roughly $5,000, remain hard to source, and are primarily sold to the transportation industry.
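
The frame layout described above (start byte, flush-or-buffer command, board address, RLE-compressed image data, end byte) can be sketched in a few lines of Python. The specific control-byte values and RLE scheme here are illustrative assumptions, not the AlfaZeta specification or the flipdisc library's implementation; real panels typically keep payload bytes below 0x80 so data can never collide with control bytes.

```python
# Illustrative control bytes; the actual AlfaZeta framing may differ.
START, END = 0x80, 0x8F
CMD_FLUSH, CMD_BUFFER = 0x83, 0x84  # flush = display now, buffer = store only

def rle_encode(payload: bytes) -> bytes:
    """Naive run-length encoding as (count, value) pairs, with runs capped
    at 127 so encoded bytes stay out of the 0x80+ control-byte range."""
    out = bytearray()
    i = 0
    while i < len(payload):
        run = 1
        while i + run < len(payload) and payload[i + run] == payload[i] and run < 127:
            run += 1
        out += bytes([run, payload[i]])
        i += run
    return bytes(out)

def build_frame(address: int, column_bytes: bytes, flush: bool = True) -> bytes:
    """Assemble one RS485 frame: start, command, address, RLE payload, end."""
    cmd = CMD_FLUSH if flush else CMD_BUFFER
    return bytes([START, cmd, address]) + rle_encode(column_bytes) + bytes([END])
```

Each column byte would encode one 7-disc column as bits, so a mostly blank panel compresses to a handful of (count, value) pairs per frame.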

Comments: Users reference artist BREAKFAST's kinetic flipdisc artworks and large installations at Heathrow Terminal 5 and Seattle's Climate Pledge Arena as notable real-world examples. Several commenters express nostalgia for flipdisc transit boards, lamenting LED replacements. Cost is a recurring concern, with nine AlfaZeta panels estimated at roughly $5,000 before other hardware. Technical skepticism arises around the claimed 25–60fps, with users doubting 60fps is achievable. Suggestions include Conway's Game of Life, sorting algorithm demos, and running DOOM. Users point to LAWO flipdot panels on eBay as a cheaper alternative and recommend Mikeselectricstuff's 26-minute YouTube deep dive. Atkinson dithering is proposed alongside Floyd-Steinberg and Bayer. One commenter asks about the practical benefit of wire ferrules over bare twisted strands. Others wish the project leaned toward generative visuals rather than app widgets, and at least one questions whether the ambient clicking would grow irritating over a workday.
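
The dithering trade-off the commenters debate is easy to see in code. Below is a minimal sketch of 4x4 Bayer ordered dithering, the UI variant the project uses, assuming a row-major grayscale buffer of 0-255 values; Floyd-Steinberg and Atkinson differ in that they diffuse each pixel's quantization error to its neighbors rather than using a fixed threshold pattern.

```python
# Classic 4x4 Bayer threshold matrix (values 0..15).
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bayer_dither(gray, w, h):
    """Binarize a grayscale image for a flip-disc panel: a disc flips 'on'
    when its pixel beats the tiled threshold for its cell. Deterministic
    and local, which keeps UI elements stable frame to frame."""
    out = []
    for y in range(h):
        for x in range(w):
            threshold = (BAYER4[y % 4][x % 4] + 0.5) * 16  # scale 0..15 -> 0..255
            out.append(1 if gray[y * w + x] > threshold else 0)
    return out
```

On a uniform 50% gray input, exactly half the discs in each 4x4 tile flip on, which is why ordered dithering renders flat UI fills as a clean checker-like texture instead of the shimmering patterns error diffusion can produce on animated content.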

Quarkdown is an open-source Markdown extension tool started as a university research project that adds scripting, reusable functions, and multiple document output formats—paged documents, plain text, documentation sites, and interactive slides—to standard Markdown syntax. The project lead developed it over two years, and its landing page demonstrates capabilities via a demo document about supermassive black hole 1ES 1927+654, which had its corona disappear and reassemble in 2018 in a first-of-its-kind astronomical event. Quarkdown offers blazing-fast compilation with live preview and allows authors to define reusable functions to avoid repetition across documents. It positions itself as a single tool replacing specialized systems like LaTeX, Pandoc, and presentation frameworks. The broader ecosystem it competes in includes Typst, MyST, Quarto, and Pandoc, none of which has yet claimed the title of definitive LaTeX successor. While Quarkdown extends rather than replaces Markdown, its additional syntax—such as .abstract, .doctype, and custom function blocks—pushes well beyond Markdown's original minimalist design philosophy.

Comments: Community debate centers on whether Quarkdown's extended syntax undermines Markdown's core value of simplicity and raw readability. Several users favor Typst as a modern LaTeX replacement, with one reporting a successful book-project switch, and raising a technical concern that Quarkdown may lack iterative layout evaluation (analogous to Typst's "context" mechanism) needed to resolve cross-document layout dependencies. The project lead participated directly in the thread. Competing tools mentioned include MyST, Quarto, Pandoc, Asciidoc, and SDocs—the latter stores Markdown in compressed base64 URL fragments processed entirely client-side. Some advocate for Emacs Org Mode as the most comprehensive plain-text authoring environment. Others note Markdown's lack of a formal specification as both strength and weakness, comparing Quarkdown's trajectory to HTML's evolution from sparse to complex. A minority argues Markdown is the wrong foundation entirely, while others see LLM-driven Markdown proliferation as simultaneously validating and fragmenting the ecosystem, echoing the xkcd "standards" problem.

The standard GPU utilization metric from nvidia-smi and cloud monitoring tools only detects GPU activity, not compute efficiency — real throughput can be 1% while dashboards read 100%. Systalyze open-sources Utilyze, a free tool measuring true GPU efficiency via hardware performance counters with near-zero overhead. Utilyze reports Compute SOL % (achieved FLOPs ÷ peak FLOPs) and Memory SOL %, validated within 2% of ground truth; even DCGM's SM Active reads 99% on a workload with only 6% actual throughput. It also estimates Attainable SOL %, the realistic ceiling for a given model and hardware, distinguishing tunable headroom from physics-imposed limits. In practice, Llama-3.1-8B prefill runs at 45% actual vs 89% attainable, LoRA fine-tuning at 1–7%, and MoE training at 3–15%, while standard tools read ~100% throughout. Systalyze's optimization platform, guided by Utilyze measurements, achieved 40% more token throughput on LLM inference and 6–8× compute gains in LoRA fine-tuning. AMD support is planned; the tool currently targets NVIDIA hardware only.
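
The Compute SOL % metric itself is simple arithmetic; what distinguishes the tool is measuring the numerator from hardware counters. A minimal sketch of the calculation, with illustrative throughput figures chosen to mirror the "busy but only ~6% efficient" example above (not measurements from Utilyze):

```python
def sol_percent(achieved: float, peak: float) -> float:
    """Speed-of-light %: achieved throughput as a share of hardware peak."""
    return 100.0 * achieved / peak

# A kernel can keep SMs 'active' (so nvidia-smi utilization reads ~100%)
# while retiring very few useful FLOPs per cycle.
peak_tflops = 989.0      # illustrative dense BF16 peak for one accelerator
achieved_tflops = 59.3   # illustrative measured throughput from counters

print(f"Compute SOL: {sol_percent(achieved_tflops, peak_tflops):.0f}%")  # -> 6%
```

The same ratio with memory bytes moved versus peak bandwidth gives Memory SOL %; a kernel near its Memory SOL ceiling but far from Compute SOL is bandwidth-bound, not under-optimized compute.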

Comments: Comments cover several practical topics. Users running local setups (dual RTX3090s with llama.cpp) ask whether Utilyze can help with GPU load balancing across mixed workloads like LLMs and video generation, noting nvidia-smi is useful for VRAM but lacks deeper insight. Those running H100 clusters with vLLM express interest in real efficiency data, asking how Attainable SOL % is calibrated when users change hyperparameters. Some suggest power consumption as a simpler indirect GPU load proxy, while others note Nsight Systems for offline profiling. Practical feedback includes requests to add memory usage, temperature, and fan speed to make Utilyze a full nvidia-smi drop-in replacement. Users also ask about nvtop's own efficiency features, Jetson/Orin support, and uninstall/cleanup procedures (particularly CAP_SYS_ADMIN capability reversal). One user flags an inconsistency: rocm-smi is cited in the post but AMD GPUs are not actually supported yet.