Hacker News

Socket researchers discovered Bitwarden CLI's npm package (@bitwarden/cli 2026.4.0) was compromised via a GitHub Actions supply chain attack, part of the broader Checkmarx campaign. The malicious payload in bw1.js shares infrastructure with mcpAddon.js, using the same C2 endpoint (audit.checkmarx[.]cx/v1/telemetry), obfuscation scheme, gzip+base64 structure, and Python memory-scraper targeting GitHub Actions Runner.Worker. The malware harvests GitHub tokens, AWS/Azure/GCP credentials, npm tokens, SSH keys, and Claude/MCP config files, exfiltrating through GitHub API commits and npm republishing under Dune-themed repository names. Unique features include a Russian locale kill switch, shell profile persistence via ~/.bashrc and ~/.zshrc, and ideological "Butlerian Jihad" branding suggesting a different operator or splinter group using shared infrastructure. Only 334 users downloaded the malicious version; the Chrome extension, MCP server, and other distributions were unaffected. Organizations should rotate all credentials, audit GitHub for Dune-themed unauthorized repositories, check for /tmp/tmp.987654321.lock, and review CI/CD workflows for injected actions.
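
The locale kill switch is a recurring pattern in this class of malware; a minimal sketch of the check, purely hypothetical since the actual payload is obfuscated and not reproduced here:

```python
def kill_switch_active(lang_tag):
    """Hypothetical reconstruction of a Russian-locale kill switch check.

    Mirrors the reported behavior only; this is not the payload's actual code.
    """
    return (lang_tag or "").lower().startswith("ru")

# The payload would consult the system locale (e.g. locale.getlocale()[0])
# and abort before doing anything malicious on a match:
print(kill_switch_active("ru_RU"))  # True  -> payload exits silently
print(kill_switch_active("en_US"))  # False -> payload proceeds
```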

Comments: Commenters highlight that setting a minimum release age in package managers (min-release-age=7 in npm 11.10+, equivalent in pnpm, bun, and uv) would have protected the 334 users who downloaded the malicious version. Rust-based rbw and tools like pass/gopass synced via private git are suggested as safer CLI alternatives, with KeePass users noting they've avoided several recent breaches by staying local. The Russian locale kill switch drew criticism as simultaneously brazen and evasive. Raycast users were reassured their bundled Bitwarden CLI version (2026-03-01) predates the compromise. Some noted the CLI doesn't auto-update, limiting damage compared to the 2022 LastPass breach where encrypted vaults were exfiltrated. Skepticism toward JavaScript-based CLI tools runs throughout, with calls for memory-safe languages, stricter package practices, and legislation mandating secure software construction. Users broadly recommend disabling auto-updates, pinning versions, and favoring self-hosted or offline alternatives.
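
The minimum-release-age mitigation is simple to reason about: a version is installable only once its publish timestamp is older than the configured window. A sketch of the check (illustrative, not npm's implementation; the malicious package's publish date here is an assumed placeholder):

```python
from datetime import datetime, timedelta, timezone

def passes_min_release_age(published_at, min_age_days, now):
    """True if the version has been public for at least min_age_days."""
    return now - published_at >= timedelta(days=min_age_days)

now = datetime(2026, 4, 20, tzinfo=timezone.utc)
malicious_publish = datetime(2026, 4, 18, tzinfo=timezone.utc)  # hypothetical date

# With the thread's min-release-age=7 (days), a two-day-old version is refused:
print(passes_min_release_age(malicious_publish, 7, now))  # False: held back
```

Since compromised versions are typically yanked within days of discovery, a week-long hold means most users never see them, which is the window commenters argue would have protected the 334 affected downloads.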

OpenAI has released GPT-5.5, rolling it out gradually in ChatGPT and Codex starting with Pro and Enterprise accounts before reaching Plus users to maintain service stability. The model improves on GPT-5.4's benchmark scores while using fewer output tokens — achieving a 56.7 on the AI Index with only 22 million output tokens, compared to Opus 4.7's 57 score requiring 111 million tokens. Terminal Bench hit 82.7%, though SWE-Bench Pro improved only slightly from 57.7% to 58.6% versus Opus 4.7's 64.3%. A standout feature is that Codex itself was used to analyze weeks of production traffic and write custom heuristic algorithms to optimize GPU work partitioning, boosting token generation speeds by over 20%. A 3D dungeon arena demo built with Codex using TypeScript and Three.js was highlighted, with third-party tools generating character meshes and textures. Pricing is set at $5 per million input tokens and $30 per million output tokens — roughly double GPT-5.4's rates, though OpenAI claims efficiency gains offset the increase. ARC-AGI 3 scores are absent from published benchmarks. A system card is available at deploymentsafety.openai.com/gpt-5-5.
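
The efficiency comparison behind those benchmark numbers can be made concrete (figures as reported above):

```python
# AI Index scores and output-token budgets as reported in the announcement.
gpt55_score, gpt55_tokens = 56.7, 22e6
opus47_score, opus47_tokens = 57.0, 111e6

token_ratio = opus47_tokens / gpt55_tokens  # ~5x fewer output tokens
score_gap = opus47_score - gpt55_score      # 0.3 points behind

print(f"GPT-5.5 uses {token_ratio:.1f}x fewer tokens for a {score_gap:.1f}-point gap")
```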

Comments: Users note the gradual rollout follows familiar OpenAI patterns, with Pro/Enterprise access first and some already seeing it in Codex CLI. The token efficiency story draws most attention: GPT-5.5 achieves near-parity with Opus 4.7 on the AI Index using roughly one-fifth the output tokens, flagged as more meaningful than raw scores. Benchmark cherry-picking concerns arise — GPT-5.5's SWE-Bench Pro (58.6%) lags Opus 4.7's 64.3%, and ARC-AGI 3 is conspicuously absent. Codex being used to recursively optimize its own serving infrastructure is called the most technically interesting part, with users hoping agentic performance work expands. Pricing draws criticism: Codex developer docs reveal tighter rate limits alongside the 100% price increase, despite efficiency claims. The Three.js dungeon game demo prompts comparisons to the Flash era for web-based AI game generation. Other concerns include missing MCP support in the GPT desktop app, privacy/data transparency, and speculation about Google's Gemini 3.5 at I/O in May. One commenter cites Solow's productivity paradox, questioning whether AI efficiency gains will produce real economic output.

GitHub experienced a multi-service outage on April 23, 2026, beginning around 16:12 UTC when degraded availability was reported for Copilot and Webhooks. By 16:19 UTC, multiple services were listed as unavailable, and by 16:34 UTC, Actions was confirmed degraded. The root cause was identified at 16:52 UTC, with Actions and Copilot mitigated by 17:03 UTC, most remaining services validated by 17:04 UTC, and Webhooks confirmed normal at 17:10 UTC. The incident was fully resolved at 17:30 UTC — roughly 78 minutes after initial reports. Affected services included Webhooks, Actions, and Copilot. GitHub committed to publishing a detailed root cause analysis at a later date.
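
The timeline arithmetic checks out; a quick sketch computing the incident phases from the reported UTC timestamps:

```python
from datetime import datetime

# Reported milestones for the April 23, 2026 incident (all UTC).
t = lambda hhmm: datetime(2026, 4, 23, *map(int, hhmm.split(":")))

first_report   = t("16:12")  # Copilot and Webhooks degraded
root_cause     = t("16:52")  # root cause identified
fully_resolved = t("17:30")  # incident closed

print((root_cause - first_report).seconds // 60)      # 40 minutes to root cause
print((fully_resolved - first_report).seconds // 60)  # 78 minutes total
```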

Comments: The outage prompted widespread frustration, with users noting GitHub's status page showed green through much of the incident even as services were visibly degraded — including CI jobs that ran for 10 minutes before failing at random. Third-party uptime tracking showed GitHub at 88.15% overall, with the best individual component at 99.78%, barely clearing two nines. At least one user cancelled a Copilot Pro+ subscription (£160 refunded), citing both the removal of Claude Opus 4.6 and repeated downtime. Several users reported feeling vindicated in recent migrations to self-hosted alternatives like Forgejo, Gitea, and GitLab, citing better uptime, speed, and free CI runners. Others raised developer lock-in concerns — noting that even if they wanted to leave GitHub, the network effect makes it impractical. Criticism of Microsoft's stewardship was common, with some attributing increasing instability to AI-generated content volume. A few users noted Vercel has also seen more downtime recently.

France Titres (ANTS), the French agency managing passports, driver's licenses, national ID cards, and immigration documents, detected a security breach on April 15, 2026, affecting user accounts on its ants.gouv.fr portal. Potentially exposed data includes login IDs, full names, email addresses, dates of birth, and unique account identifiers, with postal addresses, places of birth, and phone numbers exposed for some users. A threat actor using the moniker "breach3d" claimed responsibility on April 16, alleging 19 million records were stolen and offering them for sale at an undisclosed price — though no broad leak has occurred yet. ANTS confirmed the exposed data does not enable unauthorized portal access, but warns it can enable phishing and social engineering attacks, urging users to treat suspicious SMS, calls, or emails with extreme caution. The agency has notified France's data protection authority (CNIL), the Paris Public Prosecutor, and the national cybersecurity agency (ANSSI).

Comments: Commenters broadly express frustration that repeated government data breaches carry no meaningful penalties beyond apology notifications, with several noting their data had already been exposed in prior French government breaches — including one involving the unemployment benefits agency. Many argue society should pivot from breach prevention toward redesigning identity verification systems, pointing to digital identity approaches in the Netherlands, Japan, and India as models. The irony is widely noted that ANTS — which demands extensive identity documentation from citizens — failed to protect that very data, and critics point out the breach data should have been stored encrypted rather than in plaintext. Several commenters draw broader conclusions about government technical competency, arguing incidents like this undermine proposals for mandatory government-run age or identity verification systems. A few speculate that AI-assisted hacking may paradoxically reduce software adoption, while one commenter notes the timing is ironic given France's recent announcements about migrating away from US tech firms.

MeshCore, launched in January 2025, has grown rapidly to 38,000+ nodes and 100,000+ active users across Android and iOS. Team member Andy Kirby heavily used Claude Code to "vibe code" the standalone devices, mobile app, web flasher, and web config tools — producing majority AI-generated code — without disclosing this to the core team, which had been wary of AI-generated code. More critically, Andy secretly filed for the MeshCore trademark on March 29 without notifying anyone, prompting a complete breakdown in communication. The core team disputes Andy's "official" branding claims, asserting the true official MeshCore is the GitHub repository, to which Andy has never contributed. Andy controls the meshcore.co.uk domain and the original Discord server, while the remaining team — Scott, Liam, Recrof, FDLamotte, and Oltaco — launched meshcore.io as their official home; Andy subsequently copied the new site's design using AI despite being asked not to. The team describes the situation as "a slap in the face" and remains committed to human-written firmware, bug fixes, and community management through the new site and a fresh Discord server.

Comments: Users note that mesh networking hype around both MeshCore and Meshtastic seems overblown, particularly for emergency/off-grid scenarios, with most real-world use being simple text messaging rather than a robust resilient network. Reticulum is recommended as a more architecturally sound alternative for distributed networking at the protocol level, with hardware like the LilyGo T-Echo and the Columba companion app offering a polished experience including file and image transfers. Trademark enforcement is criticized as a recurring problem in mesh projects, with Meshtastic's rules cited as another example of overly restrictive policies — and users point out this dispute was a predictable outcome. The client app being closed source is flagged as a non-starter and seen as a structural red flag that contributed directly to this conflict. Users broadly affirm that while AI-assisted development has legitimate value, disclosure is essential because AI- and human-written code differ meaningfully, and concealment erodes community trust.

LILYGO's T-Watch Ultra is a hackable smartwatch built around an ESP32-S3 dual-core Tensilica LX7 (240 MHz), 16MB flash, and 8MB PSRAM — more memory than typical hobbyist wearables and enough for edge AI tasks via built-in vector instructions. Its 2.01-inch AMOLED (410×502) with capacitive touch, IP65 weatherproofing, and 1,100mAh battery address the durability gap that has made prior DIY smartwatches impractical. Connectivity spans Wi-Fi, Bluetooth 5.0 LE, a Semtech SX1262 LoRa transceiver for off-grid/Meshtastic use, u-blox MIA-M10Q GNSS, and ST25R3916 NFC. A Bosch BHI260AP handles motion and AI sensor fusion; additional hardware includes a DRV2605-driven vibration motor, MAX98357A audio amplifier, microphone, RTC, microSD slot, and AXP2101 power management — all accessed via USB-C. Compatible with Arduino, MicroPython, and ESP-IDF, it targets hackers wanting a capable, programmable platform without building from scratch. Pre-orders launched at $78.32 across three variants, all of which sold out quickly.

Comments: Users question whether "hackable pre-built module" is accurately called "DIY," with some pointing to true from-scratch builds (custom PCB, 3D-printed case) as the real benchmark. ESP32's power consumption is flagged as a poor fit for a watch communicating regularly — a surprising choice given the form factor. No battery life benchmark is provided, which commenters find notable. Some users articulate a minimal-feature wishlist — O2 monitoring, motion tracking, week-plus battery, basic waterproofing — and note that no current device fully satisfies it; the new Pebble comes close but lacks an O2 sensor. The Bosch BHI260AP's inclusion confirms accelerometer support, relevant for sports-tracking use cases like rowing. Broader sentiment reflects a long-standing desire for an open-source Garmin-equivalent with serious running and cycling metrics, something commenters expect may still be years away despite partial progress from Coros and Amazfit. Pre-orders across all three variants sold out rapidly after announcement.

Tailscale co-founder David Crawshaw is launching exe.dev, motivated by deep frustrations with cloud primitives. His core argument: VMs are the wrong abstraction because they bundle CPU, memory, and disk rather than letting users buy raw compute and carve it up. Cloud storage economics broke when SSDs cut seek times from 10ms to 20 microseconds, making remote block device overhead balloon from ~10% to over 10x vs. local storage, yet cloud pricing never adapted. Egress runs ~10x above data center rates by design to enforce lock-in. Kubernetes cannot fix these problems because it's an abstraction layered on broken abstractions. The AI/agent era amplifies the pain — more software means more compute demand, and every token an agent wastes contorting cloud APIs is wasted context. exe.dev's solution: a flat $20/month buys 2 CPUs, 8GB RAM, and 25GB disk, splittable into up to 25 VMs, backed by local NVMe with async off-machine block replication, built-in TLS and auth proxies, and anycast networking globally.
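
The storage-economics claim can be sanity-checked with rough numbers (the ~1 ms network round trip below is my assumption, not a figure from the post):

```python
def remote_overhead(local_access_s, network_rtt_s):
    """Network latency expressed as a multiple of the local access time."""
    return network_rtt_s / local_access_s

rtt = 1e-3  # assumed intra-datacenter round trip for a remote block device

hdd = remote_overhead(10e-3, rtt)   # HDD era: 10 ms seeks
ssd = remote_overhead(20e-6, rtt)   # SSD era: 20 microsecond accesses

print(hdd)  # 0.1  -> network adds ~10% on top of a seek
print(ssd)  # 50.0 -> network dominates local access 50x over
```

The direction matches the post's claim: a fixed network cost that was rounding error against 10 ms seeks becomes the dominant term once local access drops to tens of microseconds.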

Comments: Community reception is mixed. Many share stories of abandoning Kubernetes for single Debian VMs with Kamal/Docker, citing cost reductions and fewer incidents, though K8s defenders argue it's appropriate at scale. Hetzner is cited repeatedly as cheaper, with users reporting 1/10th the cost of managed cloud databases. Skeptics question exe.dev's differentiation from bare-metal VPS providers, noting its DNS resolves to Amazon AWS IPs. The promised egress savings are challenged: exe.dev's $0.07/GB pricing isn't materially cheaper than hyperscalers. The flat-rate model draws praise from SaaS builders wanting predictable costs. Security concerns arise around non-technical users deploying backends without understanding attack surfaces. European users flag demand for sovereign clouds outside US jurisdiction. Competing projects cited include shellbox.dev (scale-to-zero SSH VMs), clawk.work (Firecracker VMs), and self-hosted Firecracker on auctioned Hetzner hardware. Some critics argue the piece conflates Kubernetes shortcomings with cloud problems, predicting exe.dev will reinvent the abstractions it criticizes.

Honker is a SQLite extension and Rust crate bringing Postgres-style NOTIFY/LISTEN to SQLite without Redis or a separate broker. It watches the WAL file's metadata via stat(2) at 1ms intervals rather than polling tables with queries, delivering cross-process notifications in single-digit milliseconds. Three primitives are provided: ephemeral pub/sub, durable work queues with retries and dead-letter tables, and event streams with per-consumer offsets. All are row inserts inside transactions, enabling atomic commits with business writes: a rollback drops both together. The queue supports priority, delayed jobs, visibility timeouts, exponential backoff, named locks, rate limiting, and crontab-style periodic tasks. Event streams track per-consumer offsets with at-least-once delivery and replay from any saved offset. WAL mode is required, and the design is single-machine/single-writer, not intended for multi-server replication or DAG orchestration. Bindings exist for Python, Node.js, Rust, Go, Ruby, Bun, Elixir, and C++, all wrapping the same loadable extension. Modeled after Huey, pg-boss, and Oban, it is currently alpha with a potentially changing API.
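
The atomic-commit property is the interesting part: because a queued job is just a row, it commits or rolls back with the business write. A minimal sketch of the pattern using plain sqlite3 (table names are illustrative, not Honker's actual schema; a real deployment also needs WAL mode on a file-backed database):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE jobs   (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT);
""")

# Business write and job enqueue in one transaction: they commit together...
with db:
    db.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
    db.execute("INSERT INTO jobs (kind, payload) VALUES (?, ?)",
               ("send_email", "order:1"))

# ...and roll back together, so there is no "job queued but order missing" state.
try:
    with db:
        db.execute("INSERT INTO orders (item) VALUES (?)", ("gadget",))
        db.execute("INSERT INTO jobs (kind, payload) VALUES (?, ?)",
                   ("send_email", "order:2"))
        raise RuntimeError("business logic failed")
except RuntimeError:
    pass

print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
print(db.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])    # 1
```

This is the guarantee a separate broker cannot give: with Redis or an external MQ, the enqueue and the database commit are two operations that can diverge.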

Comments: The creator explains the name change from litenotify/joblite to honker, citing whimsical naming traditions in the MQ space (Oban, Huey, Celery, Sidekiq). Users highlight atomic commits as the key differentiator over separate IPC, since external messaging always risks the "notification sent but transaction rolled back" problem. Technical questions center on WAL checkpoint behavior: whether stat() polling handles file shrinkage back to zero and whether events could be lost during checkpoints. The creator explains choosing stat(2) over FSEvents/kqueue because Darwin silently drops same-process notifications, making kernel file-watch unreliable when publisher and listener share a process. Users ask whether platform IPC would be faster for the non-durable path on the same machine, and raise practical questions about SQLAlchemy compatibility and Litestream coexistence. One commenter notes PostgreSQL 19 is adding optimized LISTEN/NOTIFY with selective signaling to improve scalability when many backends listen on different channels.

NYPD officer James Giovansanti, 33, has accumulated 547 speed camera and red-light tickets since 2022 on his 4,800-pound RAM 1500 truck across Staten Island, averaging one ticket every other day in 2025. Cameras caught his truck exceeding 41 mph near P.S. 22 elementary school and Port Richmond High School, with 20 red-light violations, some logged within a minute of simultaneous speeding tickets. His truck's flat-faced hood raises pedestrian fatality risk, and visible right-side damage is notable. Policing expert Michael Alcazar called the pattern evidence of "indifference to public safety" warranting serious discipline, but the NYPD declined to act, calling the tickets unrelated to his duties. Camera violations carry no license points under New York law, so Giovansanti faces no suspension as long as he pays the $36,650 in fines. His precinct inspector oversees 33 other officers with multiple camera tickets. Advocates back Albany's "Stop Super Speeders Act" mandating speed limiters for repeat offenders, but Assembly Speaker Carl Heastie's opposition clouds passage. Gov. Hochul backed the bill, citing a small group of reckless drivers causing disproportionate harm.

Comments: Users note the Dangerous Vehicle Abatement Program — which let DOT seize cars from repeat offenders — expired in 2023 after failed implementation, and its reinstatement would address cases like this. Many argue police should face higher off-duty standards given their authority, with some calling for abolishing qualified immunity as a deeper fix. Commenters note dark irony: Giovansanti appears to be one of few NYC cops who doesn't obscure his plate, which enabled his identification. Users clarify camera tickets carry no license points in New York, unlike officer-issued violations — a loophole shielding repeat speeders from suspension regardless of wealth. Several propose income-scaled or vehicle-value-scaled fines with escalating multipliers as a fairer deterrent, and some speculate Giovansanti may have leveraged his status to get fines waived. A few challenge the framing, noting 41 mph seems unremarkable in less-dense areas, while others cite data showing stopping distances more than double between 30 and 50 km/h. One commenter questions how deep doorstep journalism should go while broadly supporting public accountability reporting.

Apple released an iOS/iPadOS update fixing CVE-2026-28950, a bug where notification content for deleted or auto-expiring messages from apps like Signal was cached on-device for up to a month. The FBI exploited this to recover deleted Signal messages from a suspect's iPhone using forensic tools, because when Signal displayed a notification, the OS independently stored that plaintext in its own notification database — outside Signal's encryption and deletion controls. Signal president Meredith Whittaker publicly called on Apple to fix it. Apple classified it as a "logging issue addressed with improved data redaction" and backported the fix to iOS 18. The fix addresses failure to purge cached notifications when marked for deletion or when the originating app was uninstalled, but the broader issue — notification text transiting Apple and Google push servers in readable form — remains unresolved. Signal already offers a generic "You've received a message" notification mode that prevents message content from reaching the OS notification layer.

Comments: Users note the patched bug is only part of a larger problem: notification content from encrypted apps passes through Apple and Google push servers in plaintext, making it accessible to governments via legal orders without needing the physical device. The fix specifically addresses the OS failing to purge notifications when an app was uninstalled or messages marked for deletion (CVE-2026-28950). iOS has long retained logs and databases for unpredictable periods, and the OS notification layer operates outside the app sandbox and in-app encryption. Many recommend enabling Signal's "generic notifications" setting so message content never enters the OS notification system. Skepticism exists over whether this was a true bug or intentional access mechanism, with broader distrust of closed-source platforms for sensitive communications. Some warn the iOS 18 patch silently enables automatic upgrades to iOS 26, and others are frustrated Signal didn't proactively alert users or prompt them to enable privacy-preserving notification settings.

Economist Sam Peltzman's 2026 paper documents a 10-15 point post-2020 US happiness decline across all demographics — a "regime change" corroborated by Fed worker satisfaction and University of Michigan consumer sentiment both hitting historic lows. Despite strong employment and wages, the author argues feelings matter because they drive politics and policy. Three culprits emerge: cumulative inflation triple the historical norm since 2020 (hitting upper-income households hardest as full employment raised service costs), collapsing institutional and interpersonal trust alongside rising social isolation and algorithmically amplified negativity, and a permacrisis decade of pandemic, geopolitical wars, polarization, and AI/climate fears fueling historically negative news coverage. Anglophone countries show disproportionate declines tied to individualism, expanded mental health diagnostics, and toxic media ecosystems, while low-inflation southern European nations saw happiness rise. Quebec's French speakers experiencing smaller happiness declines than English-speaking Canadians partially confirms the Anglophone media hypothesis.

Comments: Commenters broadly confirm the affordability crisis, noting everyday US costs have become shockingly high across housing, food, and services. The post-COVID collapse of social life — lost third places, dispersed friend groups, and difficulty forming adult friendships — resonates as a primary driver. Many challenge the "America is rich" premise, arguing wealth is concentrated in a tiny minority while housing unaffordability, healthcare costs, and extractive corporate practices squeeze the majority. Religious community erosion draws anecdotal comparisons suggesting faith-based networks confer happiness advantages secular peers lack. The Anglophone media angle sparks theories about Murdoch-style sensationalism and foreign disinformation campaigns targeting English-language platforms. AI career uncertainty and collapsing meritocracy beliefs weigh on younger commenters, while others cite political polarization, institutional distrust, and loss of life stability as root causes that no amount of positive GDP data can offset.

CSS specificity ties force browsers to use source order as a tiebreaker, making overlapping states like hover-and-disabled unpredictable and fragile to extend. Tasty is a CSS-in-JS library replacing competing selectors with a declarative priority-ordered state map: developers list states from highest to lowest priority, and the compiler generates mutually exclusive selectors using :not() chains so no two branches ever match simultaneously. A disabled button gets a plain selector; :active adds :not([disabled]); hover adds :not(:active):not([disabled]); the default excludes all three. This eliminates source-order bugs and lets developers extend components without re-deriving the full selector matrix. Tasty supports pseudo-classes, attributes, media queries, container queries, root-level state, and typed APIs. Development took several years and hundreds of iterations to handle real design-system complexity. It powers Cube UI Kit (100+ components) and the Cube Cloud enterprise product. Additional features include SSR, zero-runtime extraction, editor tooling, linting, tokens, and recipes. Best suited for complex long-lived component systems, not small landing pages.
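
The mutually exclusive selector generation described above can be sketched as a small function (illustrative only; this is not Tasty's actual compiler or API):

```python
def exclusive_selectors(base, states):
    """Build non-overlapping selectors from a highest-to-lowest priority list.

    Each state's selector excludes every higher-priority state via :not(),
    so no two branches can match the same element at the same time.
    """
    selectors, higher = {}, []
    for state in states:
        nots = "".join(f":not({s})" for s in reversed(higher))
        selectors[state or "default"] = f"{base}{state}{nots}"
        higher.append(state)
    return selectors

sel = exclusive_selectors("button", ["[disabled]", ":active", ":hover", ""])
print(sel["[disabled]"])  # button[disabled]
print(sel[":active"])     # button:active:not([disabled])
print(sel[":hover"])      # button:hover:not(:active):not([disabled])
print(sel["default"])     # button:not(:hover):not(:active):not([disabled])
```

Because each branch excludes all higher-priority states, source order in the emitted stylesheet no longer matters: exactly one branch applies to any element.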

Comments: The sole comment is from the author, who invites questions and frames the tool's motivation: CSS state resolution becomes opaque when states overlap, and extending existing components forces developers to mentally re-derive the entire selector matrix each time. The author solicits specific feedback on three fronts: what the model fails to cover that users would expect, whether the syntax feels natural or confusing, and edge cases or complex selector scenarios that might trip up the compiler or be impossible to express within the model. The author frames it as an open AMA covering the tool, its implementation, and design choices made throughout development.

A developer wrote a series of blog posts documenting the construction of "paella," a C compiler implemented in Zig, following Nora Sandler's book "Writing a C Compiler." The project served dual purposes: learning Zig and filling time while unemployed. The writeup spans 10 chapters, progressing from foundational compiler concepts (intro, unary and binary operations) through control flow (logic, conditions, loops), variables, blocks, functions, and linking. The author notes they plan to continue posting writeups if they resume working through the book.

Comments: Commenters observe the author eventually quit around chapter 19, seemingly fed up with lower-level language friction. One pushes back on the idea that compiler implementation inherently requires low-level features, arguing a compiler is fundamentally a text-to-text translation tool — Pascal compilers have long been written in Pascal — and the only truly low-level need is writing bytes to a file, which any language supports. Another asks whether Zig already ships a built-in C compiler or merely integrates an external one through its build system. A third sees the project as well-aligned with Zig's early "maintain it in Zig" philosophy, and wonders whether efforts like this could eventually reduce Zig's toolchain dependency on Clang/LLVM for its C frontend.

Jiga is a B2B manufacturing sourcing platform that connects engineers directly with vetted manufacturers, consolidating quoting, communication, and order tracking in one place with AI-powered administrative workflows. The platform targets the pain points of traditional parts sourcing — weeks-long email chains, fragmented spreadsheet tracking, customs complexity, and repeated supplier Q&A — compressing that cycle from weeks to hours. Clients reportedly include NASA, Tesla, and Google. The company describes itself as cashflow positive and growing revenue 3x year-over-year, with no reliance on emergency fundraising. Culturally, Jiga operates fully remote and async, holds only a weekly all-hands and one team sync, and flies the entire team to an annual offsite. Decision-making is pushed to whoever is closest to the problem, with no approval chains. The company emphasizes radical internal transparency: all team members see revenue, runway, valuation, and the sales pipeline. Hiring philosophy centers on senior, self-directed talent who ship fast and iterate in production rather than debate theoretical solutions.

Comments: Nothing to summarize!

Researchers found Firefox's indexedDB.databases() returns database names in an order derived from internal hash table structure, not creation order, creating a stable process-lifetime fingerprint. The root cause is in ActorsParent.cpp: private browsing maps names to UUIDs in a global StorageDatabaseNameHashtable shared across all origins, then iterates results from an unsorted nsTHashSet, exposing bucket order as a deterministic identifier. Because the mapping is process-scoped, unrelated websites can independently observe identical permutations to link activity across domains without cookies. With 16 controlled names, the fingerprint space reaches ~44 bits (16! permutations), enough to uniquely identify concurrent browser instances. In Firefox Private Browsing, the identifier persists after all private windows close; in Tor Browser, it survives "New Identity," defeating its core isolation guarantee. The fix — lexicographically sorting results before returning — was released in Firefox 150 and ESR 140.10.0 as Mozilla Bug 2024220; Qubes-Whonix users are reportedly unaffected.
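
The ~44-bit figure follows directly from the permutation count; a quick sketch of the arithmetic:

```python
from math import factorial, log2

n = 16                        # controlled database names a site can register
permutations = factorial(n)   # distinct observable orderings
bits = log2(permutations)     # fingerprint entropy in bits

print(f"{permutations:,} permutations = {bits:.2f} bits")
```

Roughly 44 bits comfortably exceeds what is needed to distinguish every concurrently running browser instance on the planet, which is why 16 names suffice for a unique identifier.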

Comments: Commenters question why a fingerprinting company would responsibly disclose a vulnerability that benefits competitors, and note that disabling JavaScript prevents exploitation entirely. Several observe the threat resets on browser restart, limiting attacker utility, while others note this "pseudonymizes" rather than fully deanonymizes Tor users, recommending Whonix and Qubes-Whonix for high-threat users. Readers question why the global StorageDatabaseNameHashtable is shared across all origins, arguing per-origin hash tables would be a cleaner fix than sorting output. Some advocate requiring user permission before exposing IndexedDB metadata, while others argue against expanding web standards APIs, favoring minimal browser primitives to reduce fingerprinting surface. Tails and Qubes-Whonix users note they are unaffected. One commenter shares a Wayback Machine archive for readers concerned about being fingerprinted by the disclosure host itself, and several note portions of the article read as AI-generated.

Citizen Lab uncovered two surveillance campaigns exploiting known vulnerabilities in global telecom infrastructure, with vendors operating as "ghost" cellular providers to geolocate targets via legitimate networks. The campaigns abused SS7 — the 2G/3G backbone protocol lacking authentication and encryption — and Diameter, its 4G/5G successor, which remains exploitable when providers skip protections or fall back to SS7. Three telecoms served as repeated surveillance entry points: Israeli 019Mobile, British Tango Networks U.K., and Airtel Jersey (now Sure). Sure denied knowingly enabling tracking; 019Mobile said it could not confirm the identified infrastructure was theirs. The first campaign combined SS7 and Diameter exploits against targets worldwide over several years, implying multiple government clients; clues point to an Israeli-based geo-intelligence firm. The second used SIMjacker attacks — silent SMS commands sent directly to a target's SIM card that invisibly convert the device into a location tracker. Researcher Gary Miller called SIMjacker fairly common but geographically targeted, and emphasized these two campaigns are a fraction of millions of global attacks.

Comments: Users contrast the bureaucratic hurdles emergency services face to obtain location data — affidavits, faxes, hours of legal review — with the ease with which surveillance vendors exploit the same protocols. The NSA's "LOVEINT" scandal is cited as evidence that state-level surveillance is routinely abused for personal reasons. A key technical point: even 5G users remain vulnerable via SS7 downgrade attacks due to backward network compatibility; a data-only SIM with encrypted apps is the only reliable mitigation. Personal accounts describe stalker ex-partners using telco employee access to track victims across new SIMs and devices, with police dismissing complaints. Russia's model — location data sold on black markets and cross-referenced with cameras and Wi-Fi logs — is raised as a likely global trajectory. Concern is voiced that journalists in conflict zones may be tracked to their deaths via such methods. Users also note that widespread acceptance of Meta and Google location tracking makes commercial surveillance easier, and that Israeli firms have reportedly refined these techniques through deployment in Gaza and Lebanon.

Arch Linux has achieved a bit-for-bit reproducible Docker image, distributed under a new "repro" tag, following a similar milestone for its WSL image earlier this year. The key caveat is that pacman's keyring is stripped from the image to ensure reproducibility, so users must run pacman-key --init && pacman-key --populate archlinux before installing packages — either interactively or via a Dockerfile RUN statement; Distrobox users can handle this via a pre-init hook. Reproducibility is verified through digest equality across builds using podman inspect and the diffoci tool. The main technical challenges involved setting SOURCE_DATE_EPOCH and honoring it in OCI image labels, removing the non-deterministic ldconfig auxiliary cache file, and normalizing timestamps during docker build/podman build using --source-date-epoch and --rewrite-timestamp flags. The rootFS build system is shared with the WSL image. Future plans include setting up an automated rebuilder to periodically verify the image's reproducibility status and publish build logs publicly.
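The keyring step lends itself to a derived image; a minimal Dockerfile sketch, assuming the reproducible image is pulled as archlinux:repro (the exact registry path may differ) and using git as a stand-in package:

```dockerfile
# Base on the reproducible "repro" tag (exact registry path is an assumption).
FROM archlinux:repro

# The pacman keyring is stripped to keep the image bit-for-bit reproducible,
# so it must be regenerated before any signed package can be installed.
RUN pacman-key --init && pacman-key --populate archlinux

# Package installs work normally afterwards (git is a stand-in example).
RUN pacman -Syu --noconfirm git
```

Verifying reproducibility then reduces to comparing the digests of two independent builds, e.g. with podman inspect or diffoci, as the article describes.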

Comments: Users welcome the achievement, noting that reproducible builds provide mostly emotional payoff until a real incident occurs — one commenter recounts an afternoon lost bisecting a three-byte timestamp delta between two "identical" images. There is broad agreement that all Docker images should have been reproducible from the start, with apt-get update in build steps called an anti-pattern. Some suggest using "OCI Image" terminology since these images work with Podman equally well. Reproducible builds are seen as especially important for firmware, certification, and safety-critical applications, with hope that other distros will follow. A debate emerges over whether rolling-release distros like Arch and Alpine may outcompete declarative systems like NixOS, since install scripts are considered more powerful and less verbose. Supply chain attack risk on rolling-release distros is raised, with early adopters seen as canaries. One commenter frames this milestone in a longer arc, noting deterministic compiler output itself took five decades to achieve.

Ursa Ag, a Canadian startup, sells mechanically simple tractors using remanufactured Cummins 12-valve diesel engines — $129,900 CAD (~$95K USD) for 150hp and $199,900 CAD (~$146K USD) for 260hp, roughly half of comparable John Deere pricing. The tractors avoid modern ECUs, DEF/DPF emissions systems, and proprietary software, making them repairable by any mechanic with basic tools. EPA regulations mandating DEF/DPF on farm equipment since 2014 are a key reason conventional tractors became complex and expensive — not solely corporate greed — and force this startup to remanufacture rather than source new engines. The Cummins 12-valve is among the most widely understood diesel engines in North America. The pitch resonates with right-to-repair advocates, arriving just as John Deere settled a right-to-repair lawsuit for $99 million. Ursa Ag targets independent farmers who never wanted the complexity large agri-industrial operations demand. One structural constraint is dependence on a finite supply of engine cores for remanufacturing and parts that are no longer produced. Founder Wilson is quoted saying he "saw the gap and drove a tractor through it." Details and video are at ursa-ag.com.

Comments: Commenters support the right-to-repair ethos but question long-term sustainability — specifically how a company survives selling durable machines whose wear parts (the engine) come from Cummins, not Ursa Ag. Pricing skepticism is common: Kubota M-series starts around $70–100K and Belarusian MTZ tractors run ~$50K with comparable simplicity. Others stress that EPA emissions mandates since 2014 drove the shift to complex ECU-dependent systems, making the startup legally dependent on remanufacturing old engines — a point anti-lock-in advocates often overlook. The Cummins 12-valve gets consistent praise, with owners reporting reliable starts at -10°F past 250K miles. A dominant theme is demand for the same approach in cars, TVs, motorcycles, and appliances, reflecting broad fatigue with subscriptions and planned obsolescence. John Deere's remote-bricking of tractors stolen during the Ukraine invasion is cited as evidence that OEM software control poses systemic agricultural risk. Smart-home analogies run deep, with advocates arguing the real problem is lock-in, not electronics, and proposing modular open-standard approaches where software serves the machine rather than owns it.

A hand-crafted 5x5 pixel font designed for tiny microcontroller screens stores all characters in just 350 bytes, making it ideal for 8-bit devices like the AVR128DA28. Characters fit within a 5-pixel square and are safe on a 6x6 grid, with fixed monospace width simplifying programming since a rendered string's width in pixels always equals 6 times the character count. The font is derived from lcamtuf's 5x6 font-inline.h, itself inspired by the ZX Spectrum's 8x8 font. The author argues 5x5 is the minimum no-compromise size: 4x4 can't render E, M, or W properly, while 3x3 is technically possible but unreadable. Smaller experimental variants are explored — 3x5, 3x4, 3x3, 2x3, 3x2, and 2x2 — with readability degrading significantly below 3x5. On real hardware, subpixel rendering creates a pleasing pseudo-dropshadow effect that improves legibility beyond what simulations suggest. The font compares favorably to antialiased vector fonts at similar scales, which require megabytes of code and data. Practical display targets are 160x128 or 128x64 OLEDs, where pixel-efficient hand-drawn fonts outperform software-rendered alternatives.
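The storage arithmetic can be sketched in a few lines: packing each 5-pixel row into one byte costs 5 bytes per glyph, so 350 bytes corresponds to a 70-glyph set. A minimal Python illustration (the 'A' bitmap and the row-per-byte layout are assumptions for the sketch, not the font's actual encoding):

```python
# Each glyph is 5 rows; each row's 5 pixels pack into one byte, so one
# glyph costs 5 bytes. 70 glyphs * 5 bytes = 350 bytes total.
GLYPH_A = [  # hypothetical 5x5 'A' bitmap, one string per row
    ".###.",
    "#...#",
    "#####",
    "#...#",
    "#...#",
]

def pack_glyph(rows):
    """Pack 5 rows of 5 pixels into 5 bytes (MSB = leftmost pixel)."""
    out = bytearray()
    for row in rows:
        byte = 0
        for i, px in enumerate(row):
            if px == "#":
                byte |= 0x80 >> i  # bits 7..3 carry pixels; 2..0 are spare
        out.append(byte)
    return bytes(out)

def render_glyph(packed):
    """Unpack 5 bytes back into '#'/'.' rows for display."""
    return ["".join("#" if b & (0x80 >> i) else "." for i in range(5))
            for b in packed]

packed = pack_glyph(GLYPH_A)
assert len(packed) == 5             # one glyph: 5 bytes
assert 70 * len(packed) == 350      # 70 glyphs: 350 bytes total
assert render_glyph(packed) == GLYPH_A  # packing round-trips losslessly
```

The fixed 6-pixel advance then makes layout trivial: a cursor simply moves 6 pixels right per character, with no per-glyph width table needed.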

Comments: Users note the 5x5 size is misleading since character spacing requires a 6x6 grid, and full ASCII coverage is absent; the Spleen 5x8 font is suggested as an alternative. Proper typographic descenders need at least 7–8 vertical pixels, making the HD44780's 5x7 standard more practical for full Latin coverage. Alternative compact fonts discussed include Silkscreen, Gremlin-3x6, Tom Thumb (4x6), and MiniGent (3x4, supporting Greek, math, and emoji). Specific glyph weaknesses cited include unreadable lowercase 'g' and lowercase 't' resembling uppercase T. The CC BY-NC-SA license draws criticism for potentially blocking commercial device use. Subpixel and multi-level grayscale rendering are raised as techniques enabling even smaller legible text, with one demonstration achieving ~3 horizontal pixels per character using context-assisted inference. Historical precedents noted include HP-48G, C64 80-column software modes, Casio Organizers, and Atari ST. Real-world applications cited span LED signage, e-paper calendars, DJ hardware, and game mods. A rendering bug is noted in the 3x2 example where 'b' appears instead of 'p' in "can probably."

Isopod Site is a hobbyist resource dedicated to scientifically rigorous isopod identification, a group of crustaceans considered understudied relative to other invertebrates. Species identification is based on peer-reviewed literature rather than superficial visual similarities, aiming to reduce misidentification common in the keeping hobby. Macro photography, captured with an Olympus E-M10 Mark IV, Laowa 50mm 2:1 lens, and a DIY flash diffuser, documents key anatomical characters of each species; all images are copyrighted and automatically monitored across social media and websites for unauthorized use. The site covers isopods as low-maintenance pets, with growing hobby popularity driving practical keeping content. Advanced keepers can find guidance on selective breeding for unique morphs, where individuals with distinctive traits are isolated to establish new lineages. A taxonomic discussion section invites corrections on species placements, openly acknowledging that photo-based identifications carry inherent limitations.

Comments: Commenters broadly praise the site as a nostalgic callback to intimate, niche internet projects, with several bookmarking it and appreciating its educational depth. A prominent thread debates whether the photography is AI-generated, though others note the author is Nicky Bay (nickybay.com), a recognized macro photographer, and argue the suspicion arises precisely because the images are unusually flawless rather than from actual evidence of AI. One commenter highlights isopod "nuptial rides," which can last many days and involve competing males attempting to dislodge rivals. Several users note the rising popularity of isopods as pets, with some recounting coworkers or family members who keep terrariums. One parent mentions a child using the site as a drawing reference. Some users report being unable to load any photos on Firefox or Chrome, seeing only empty boxes. Additional recommendations include macro photographer Dany Bittel's Patreon for photography tutorials.

Nilay Patel argues that "software brain" — seeing the world as databases controllable via code — explains why tech loves AI while the public increasingly doesn't. Polls are stark: NBC News found AI with worse favorability than ICE, Quinnipiac found over half of Americans think AI will do more harm than good, and Gallup found only 18% of Gen Z hopeful (down from 27%), with angry respondents rising to 31%. Patel contends AI doesn't have a marketing problem — ChatGPT has 900 million weekly users — but that tech is asking people to "flatten themselves into databases," which is fundamentally backwards. He draws parallels between software brain and lawyer brain (both use formal structured language to guide complex systems), cites DOGE's failure as proof databases don't equal reality, and notes enterprise AI genuinely fits the model since businesses already operate as data loops. CEOs like Dario Amodei openly warning of mass job displacement, combined with demands to integrate AI into every life domain, creates helplessness Patel ties to political violence. His conclusion: computers must adapt to people, not the reverse.

Comments: Commenters largely push back on the article's central premise. Several argue that people actively want to automate tedious work — pointing to dishwashers and OpenAI's 90 million users as evidence — but that the key issue is reliability: automation must work consistently enough that verification doesn't cost more effort than doing the task manually. One commenter calls the piece poorly reasoned, arguing the author built a straw man since the claim "not everything should be automated" is not what AI proponents actually argue. Another notes that people care about second-order effects of AI — cheaper prices, greater flexibility — not the technology itself, and criticizes the piece for conflating enterprise automation with vague public "vibes" about AI. A fourth commenter flags a conceptual problem: lumping the longstanding concept of automation together with specific recent AI grievances muddles the analysis. The thread closes on a wry note — "the dream of automation will never die" — suggesting public resistance may be more nuanced than a blanket rejection of automation itself.

Ars Technica has published a reader-facing AI policy stating all reporting, analysis, and commentary is human-authored, with humans making every editorial decision. Reporters may use AI tools to assist research — navigating large volumes of material, summarizing background documents, and searching datasets — but AI output is never treated as authoritative and must always be verified. The creative team may use AI for certain visual material under human creative direction. Any staff member using AI bears full personal responsibility for accuracy, and AI must never be used to generate material attributed to a named source. The policy covers text, research, source attribution, images, audio, and video, stemming from two convictions: that AI cannot replace human insight, and that AI tools used well can help professionals do better work. The policy was prompted in part by a recent incident in which a reporter was fired after AI fabricated quotes in a published article.

Comments: Commenters are largely skeptical, pointing to a contradiction between claiming the outlet is "written by humans" while permitting AI for visual content and research. Many cite the recent firing of a reporter over AI-fabricated quotes — reportedly caused when AI hallucinated quotes from a blog that blocks scrapers — arguing editorial leadership, not just reporters, bears responsibility. Critics contend permitting AI research tools while placing sole accountability on staff is self-defeating, since LLMs reliably produce plausible but subtly inaccurate summaries. The policy's individual-responsibility clause is seen as an abrogation of editorial oversight. Others raise concerns about AI degrading the content ecosystem it relies on for training data, with one user proposing a Spotify-style micropayment model for scraped content. Several view the policy as confirming the outlet has become a "slop shop," while a few acknowledge the verification requirement as reasonable. Comparisons are drawn to Crikey's stricter AI ban and its enforcement against a contributor.

A Turkish novelist reflects on initially dismissing Leylâ Erbil (1931–2013) before recognizing her significance. Erbil, of the "1950s generation" of Turkish modernists, is known for A Strange Woman (1971) and What Remains (2011), an experimental verse bildungsroman translated into English in 2024. What Remains follows narrator Lahzen through Istanbul's layered history of ethnic erasure and political violence, weaving personal trauma with Turkey's historical atrocities: the Armenian genocide, Dersim Kurdish massacres, 1955 pogroms, and the 2007 assassination of journalist Hrant Dink. Erbil's "Leylâ signs"—triplet commas and minimal capitalization—force readers to confront difficult history. Istanbul's stones and ruins serve as the novel's central metaphor, preserving what official history erases. The author ultimately concludes Erbil's autobiographical method was not self-indulgent but deeply political, collaging personal life with national history to dramatize centuries of state violence against minorities.

Comments: The sole commenter offers a brief, entirely dismissive reaction with no elaboration or specific criticism of the content.

Raylib 6.0, the project's biggest release in 12 years, adds 20+ API functions (total: 600), 70+ examples (total: 215+), and 2,000+ commits from 210+ new contributors. The headline feature is rlsw, a CPU-only software renderer implementing OpenGL 1.1+ as a single-file header library — enabling raylib on GPU-less devices like ESP32 and emerging RISC-V hardware with no user-side code changes. Three new platform backends arrive: a headless memory framebuffer (rcore_memory) for server-side rendering, a native Win32 backend removing GLFW/SDL dependencies, and a direct Emscripten web backend replacing libglfw.js. Fullscreen and High-DPI scaling were redesigned from scratch and tested across Windows, Linux (X11/Wayland), and macOS including 4K multi-monitor setups. The skeletal animation system gains blending between frames and across different animations with improved GPU skinning. The filesystem API was consolidated into rcore (removing the utils module) with 40+ functions, and a new text management API adds 30+ string utilities. A new rexm tool automates examples management. Development was funded by NLnet and NGI Zero Common Fund, with platinum sponsors puffer.ai and comma.ai.

Comments: Users are broadly enthusiastic about raylib 6.0, with the CPU software renderer drawing particular interest from those planning to test it on ESP32S3 hardware. Language pairing recommendations are popular: Odin is flagged as excellent for rapid prototyping with raylib, and Golang+raylib is praised for hitting a sweet spot between full engine convenience and building from scratch. One developer is building vectarine, a code-first framework/engine hybrid on top of raylib that adds hot reloading, integrated debugging, and asset management. A beginner asks what raylib lacks compared to Unreal and Unity, reflecting the project's expanding reach. Several users share personal connections to the project — including a Swift developer using C-interop to build a roguelike — noting that raylib helped them actually ship small complete projects. One user was inspired to start learning C and build their own zero-dependency renderer. Community humor surfaces around "Tsoding," a programmer known for live-coding raylib projects at speed, with multiple users anticipating a new 6.0 speedrun stream.