A Canadian company, referenced at ursa-ag.com, has entered the tractor market with deliberately simple, mechanically straightforward machines powered by legacy Cummins 5.9 engines, targeting farmers frustrated by John Deere's software-locked, dealer-dependent repair ecosystem. Founder Wilson identified the market gap and built a business around it. The tractors avoid modern electronics, telematics, and proprietary diagnostics, appealing to farmers in Saskatchewan and Alberta who want equipment they can service themselves or have fixed by any local mechanic. The appeal ties directly into the right-to-repair movement: John Deere has faced years of criticism for requiring expensive authorized-dealer service even for routine repairs. Remanufactured engines likely sidestep modern emissions mandates such as Diesel Exhaust Fluid (DEF), a system farmers widely detest. The model draws comparisons to glider trucks (which substitute older powertrains to avoid emissions compliance) and to Slate's approach in the car market — selling stripped-down, owner-serviceable vehicles as a premium alternative to increasingly locked-down OEM products.
AI coding tools frequently over-edit — fixing a requested bug while unnecessarily rewriting surrounding code, renaming variables, and adding unrequested logic — a brownfield failure mode that is invisible to test suites and burdens code review. A study benchmarks 400 programmatically corrupted BigCodeBench problems, using token-level Levenshtein distance and Added Cognitive Complexity to measure over-editing. Among frontier models, GPT-5.4 over-edits most severely (Levenshtein 0.395 in reasoning mode) while also posting weak correctness (Pass@1 0.723), whereas Claude Opus 4.6 leads in both correctness (0.912) and edit minimality (0.060). Reasoning models over-edit more by default but respond better to an explicit "preserve original code" instruction. Among four training methods on Qwen3 4B, SFT collapsed out-of-domain (Pass@1 0.458, 43% LiveCodeBench degradation); RL generalized cleanly with no catastrophic forgetting. LoRA at rank 64 nearly matches full RL, and the RL recipe scaled to Qwen3 14B with consistent gains across all metrics.
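The distance metric can be sketched in a few lines. The study's exact tokenizer and normalization are not specified here, so whitespace tokens and max-length normalization are assumptions:

```python
def levenshtein(a: list[str], b: list[str]) -> int:
    # Classic dynamic-programming edit distance over token sequences.
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ta != tb)))    # substitution
        prev = cur
    return prev[-1]

def edit_ratio(original: str, patched: str) -> float:
    # Normalize by the longer sequence: 0.0 = untouched, 1.0 = fully rewritten.
    a, b = original.split(), patched.split()
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))
```

A minimal fix that leaves surrounding code alone scores near zero on this ratio; a model that rewrites the whole function to change one line scores close to one, which is exactly the behavior the benchmark is built to surface.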
A 5x5 pixel font designed for tiny OLED displays fits all characters within a 5-pixel square (safe on a 6x6 grid) and occupies just 350 bytes, making it ideal for 8-bit microcontrollers like the AVR128DA28. Based on lcamtuf's 5x6 font-inline.h—itself inspired by the ZX Spectrum's 8x8 font—5x5 is argued to be the minimum viable size: 4x4 cannot properly render "E", "M", or "W", while smaller sizes become unreadable. The monospace layout simplifies programming since a string's rendered width in pixels is always 6× its character count, eliminating overflow concerns. Lowercase letters are drawn one pixel shorter than uppercase for visual distinction, and subpixel rendering on color displays creates a pseudo-dropshadow effect. Progressively smaller variants are explored: 3x5 sacrifices M/W/Q legibility but gains 50% more columns; 3x4 loses distinct case; 3x3 loses numerals but letters remain recognizable; 2x3 and 3x2 produce largely unrecognizable output; 2x2 supports only digits, as a cipher. A vector font at similar scale requires megabytes of code and data yet looks worse than the 350-byte hand-crafted result.
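The width arithmetic, and the kind of row-per-byte glyph storage that keeps a font this small, can be sketched as follows (the GLYPH_E bitmap is an illustrative guess, not the actual font data):

```python
CELL = 6  # 5x5 glyph plus one blank column/row of spacing on the 6x6 grid

def text_width(s: str) -> int:
    # Monospace: every character advances exactly one 6-pixel cell.
    return CELL * len(s)

# Hypothetical glyph encoding: five rows, low 5 bits of each byte used.
GLYPH_E = [0b11111,
           0b10000,
           0b11110,
           0b10000,
           0b11111]

def render(glyph: list[int]) -> str:
    # ASCII preview of one 5x5 glyph, MSB of the 5 bits on the left.
    return "\n".join(
        "".join("#" if row >> (4 - x) & 1 else "." for x in range(5))
        for row in glyph)
```

Five bytes per glyph is what makes the ~350-byte budget plausible: roughly 70 printable characters at 5 bytes each, with no per-glyph width table needed because the layout is monospace.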
Google introduced its 8th-generation TPUs at Google Cloud Next, unveiling two purpose-built chips: TPU 8t for training and TPU 8i for inference. TPU 8t scales to 9,600 chips per superpod with 2 petabytes of shared HBM and 121 ExaFlops of compute, targets 97% productive compute time ("goodput"), and enables near-linear scaling to one million chips via the new Virgo Network fabric. TPU 8i features 288 GB HBM paired with 384 MB on-chip SRAM (3x prior gen), doubled ICI bandwidth at 19.2 Tb/s, a new Boardfly topology cutting network diameter by over 50%, and an on-chip Collectives Acceleration Engine reducing latency by up to 5x — delivering 80% better performance-per-dollar than the previous generation. Both chips run on Google's custom Axion ARM-based CPUs and achieve up to 2x better performance-per-watt over the prior Ironwood generation, supported by fourth-generation liquid cooling. The chips were co-designed with Google DeepMind to match KV-cache and parallelism demands of reasoning and MoE models, and support JAX, PyTorch, SGLang, and vLLM natively with bare-metal access. Both will be generally available later in 2026 as part of Google's AI Hypercomputer unified stack.
Martin Fowler reflects on Larry Wall's programmer virtues — hubris, impatience, and laziness — where Bryan Cantrill praises laziness as the driver of elegant abstraction: it takes hard work to make systems simple. LLMs fundamentally lack this virtue since work costs them nothing, leading to more code rather than better abstractions and feeding perverse vanity metrics like lines-of-code counts. Fowler illustrates this personally, simplifying a music playlist generator via YAGNI instead of reaching for a coding agent, questioning whether an LLM would have caught the same over-complication. Jessica Kerr's TDD-style agent prompting approach is highlighted: write agent instructions first, then add a reviewer agent to verify compliance — mirroring test-before-code discipline. Finally, Mark Little uses the sci-fi film Dark Star — where a crew member talks a sentient bomb out of detonating via Socratic doubt — as a metaphor for AI overconfidence, arguing AI is optimized for decisiveness but needs deliberate inaction explicitly designed in for high-stakes, irreversible decisions.
Version 2 of the Ground-Mounted Solar Energy in the United States (GM-SEUS) dataset grew from 2.9M to 3.4M panels and added a new rooftop arrays dataset. Using DuckDB with the H3, Spatial, and Parquet extensions alongside GDAL 3.9.3 and QGIS 4.0.1, the author converts the GeoPackage files to Hilbert-curve-ordered Parquet for efficient spatial querying. The three datasets contain 18,980 arrays, 3,429,157 panels, and 5,822 rooftop arrays, sourced from OSM, USPVDB, TZSAM, CECSFC, and others. Rooftop metadata is sparse: azimuth is null 89.6% of the time, tilt 90.6%, and with only ~5,800 records, geographic coverage has significant room for improvement. Average array capacity has risen sharply: most pre-2017 installations were under 5 MWac, versus a 2023 average of 34 MWac, with max DC capacity exceeding 1.4 GW. Geographic heatmaps show solar concentration in California, Texas, and the Southwest. A notable clarification: the circular mirror patterns in the California desert near Ivanpah are solar thermal heliostats, not photovoltaic panels.
Penn State researchers spent three fruitless weeks chasing Florida thunderstorms in June 2024 before documenting the first directly observed corona discharges from treetops in nature on their return trip north. Using a modified 2013 Toyota Sienna fitted with a custom telescopic UV instrument, the team sought corona discharges — minuscule electrical pulses emitted at tree leaf and needle tips that make the canopy glow in ultraviolet — a phenomenon theorized for over 70 years based on anomalous electric field activity over forests during storms, but never confirmed outside the lab. The breakthrough came in a parking lot at the University of North Carolina at Pembroke, where instruments were trained on a sweetgum tree 100 feet away and later a loblolly pine, during a sustained, nearly two-hour thunderstorm near Interstate 95. The findings, published in Geophysical Research Letters, confirm long-standing theory and raise new questions about corona-produced hydroxyl radicals and their potential role in atmospheric chemistry.
Zed has released parallel multi-agent support in its latest update, letting users run multiple AI agent threads simultaneously within a single window via a new Threads Sidebar. Each thread can use a different AI provider, access isolated worktrees, or span multiple repositories, with agents able to read and write across repos automatically. The sidebar provides at-a-glance thread management—starting, stopping, and archiving threads grouped by project. A redesigned default layout moves the Threads Sidebar and Agent Panel to the left, relegating the Project and Git Panels to the right to keep agentic workflows front and center; existing users must opt in. The feature runs at Zed's native 120 fps and is fully open-source. Zed frames the release around "agentic engineering," a term coined by CEO Nathan Sobo to describe blending human judgment with AI tooling rather than fully delegating to agents. Development involved stress-testing with hundreds of simultaneous threads and multiple UX iterations before shipping.
Bodega Cats of New York is a project documenting NYC's working bodega cats through individual profiles, a forthcoming photography book (120 photographs, 60+ stories, October 2026, Quarto Publishing), and commercial brand placement services in real stores. The project spotlights a legal conflict: New York State sanitary code prohibits animals in food establishments, meaning owners can be fined for cats that have lived in their stores for years. Two bills—Int. 1471 at City Council and A08341 at State Assembly—are in committee to legalize bodega cats, driven by a 14,000-signature petition. A "Cats About Town Tours" walking tour series explores NYC's broader working cat history, from dock strays and federally-salaried post office cats to brewery cats that never missed a shift.
John Wanamaker's 1876 fixed price tags ended haggling; "surveillance pricing" now leverages purchase history, location, and demographics to charge individuals different prices for the same goods — as seen with Ticketmaster, Uber, Orbitz, Princeton Review charging by ZIP code demographics, and Instacart's 23% price variance between customers. Information asymmetry heavily favors corporations via data brokers and real-time algorithms. New York's 2025 Algorithmic Pricing Disclosure Act mandates labeling algorithmically set prices, but disclosure alone leaves consumers informed yet unprotected. Federal bills adding FTC enforcement and a private right of action face long odds in a divided Congress. The National Retail Federation challenged New York's law on First Amendment grounds citing Sorrell v. IMS Health (2011), which held data mining for marketing is protected speech; the Southern District dismissed the suit, with appeal pending. Sorrell complicates outright bans, but courts apply lenient standards to disclosure mandates. Effective reform must address upstream drivers: data brokers, behavioral advertising, and consumer segmentation.
GitHub CLI introduced opt-out telemetry in version 2.91.0, enabled by default for non-enterprise users. Each payload includes command name, flags, OS, CPU architecture, a persistent device UUID, a per-invocation UUID, TTY status, timestamp, and CLI version. GitHub states the goal is evaluating feature adoption to prioritize development — identifying underused subcommands or heavily used flag combinations. Users can inspect the JSON payload without transmitting it via GH_TELEMETRY=log, and disable collection via GH_TELEMETRY=false, DO_NOT_TRACK=true, or gh config set telemetry disabled; env vars take precedence over config. Data goes to GitHub's internal analytics infrastructure under the GitHub General Privacy Statement. Third-party CLI extensions manage their own telemetry separately and are unaffected by this opt-out, while enterprise users have telemetry disabled by default. GitHub's characterization of the data as "pseudonymous" is notable: a persistent device UUID means events are linkable across sessions, which makes them identifiable rather than anonymous.
The author scored 500 Show HN submissions for AI design patterns using Playwright with deterministic CSS/DOM checks, finding 67% exhibit detectable AI-generated aesthetics. Show HN submissions tripled recently, driven largely by Claude Code, prompting HN moderators to introduce /showlim restrictions — which also explains a March 2026 chart dip. AI design patterns fall into four categories: fonts (Inter, Space Grotesk, Geist, Instrument Serif), colors (purple gradients, dark mode with grey body text, barely-passing contrast), layout quirks (centered hero, badge above H1, colored card borders, icon-topped feature grids, numbered steps, stat banners), and CSS patterns (shadcn/ui, glassmorphism). Results show 21% "heavy slop" (5+ patterns), 46% mild (2–4), and 33% clean (0–1), with a 5–10% false-positive rate. The author views this as uninspired rather than harmful — pre-AI, everything looked like Bootstrap — but notes a meaningful difference between deliberate design and shipping LLM defaults. The post was human-written with AI-assisted scoring, and closes by questioning whether design will matter as AI agents become the primary web users.
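A toy version of such deterministic checks might look like this. The author's actual Playwright selectors and full pattern list are not reproduced here, so the regexes, color values, and bucket thresholds below are stand-ins (thresholds taken from the reported buckets):

```python
import re

# Toy stand-ins for a few of the deterministic stylesheet checks described.
PATTERNS = {
    "ai_font": re.compile(
        r"font-family:[^;]*(Inter|Space Grotesk|Geist|Instrument Serif)", re.I),
    "purple_gradient": re.compile(
        r"linear-gradient\([^)]*(#?7c3aed|purple|violet)", re.I),
    "glassmorphism": re.compile(r"backdrop-filter:\s*blur", re.I),
}

def score(css: str) -> list[str]:
    # Return the names of every pattern the stylesheet trips.
    return [name for name, rx in PATTERNS.items() if rx.search(css)]

def bucket(hits: int) -> str:
    # The post's three buckets: heavy slop (5+), mild (2-4), clean (0-1).
    return "heavy" if hits >= 5 else "mild" if hits >= 2 else "clean"
```

Because the checks are plain string and DOM matches rather than an ML classifier, the method is reproducible and auditable, at the cost of the 5-10% false-positive rate the author acknowledges.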
Florida orange production fell 95% by 2026 (242M→12M boxes) as HLB (citrus greening disease) now infects 100% of trees; the bacterium, spread by the Asian citrus psyllid since 1998, silently destroys vascular systems 3-5 years before symptoms appear. No cure exists: OTC (oxytetracycline) injections slow progression but need reapplication every four months, and a disease-resistant GMO tree is 10-18 years from commercial deployment. Decades of glyphosate overuse starting in the 1970s weakened root systems, and five major hurricanes after 2017—Irma, Ian, Idalia, Helene, and Milton—spread the psyllid statewide. Deregulation under governors Scott and DeSantis enabled sprawl to consume former grove land, while processing plants shrank from 53 in 1977 to just four today. Tropicana, sold to French PE firm PAI Partners in 2022, nears bankruptcy from debt-loading and shrinkflation; Alico, America's largest citrus grower, quit in 2025, cutting 35% of Florida's output in one announcement. Supply now comes mainly from Brazil and Mexico, but Brazil's trees are already 47.63% HLB-infected, threatening global supply.
Columnar storage can be understood as an extreme form of database normalization rather than a purely low-level encoding detail. In row-oriented storage, each record's fields sit contiguously on disk, making row lookups fast and inserts cheap, but requiring full-row reads even for single-column queries like computing a histogram. Column-oriented storage inverts this: each column is stored as its own contiguous array, enabling efficient column scans but making row reconstruction costly. The conceptual leap is that columnar data maps directly to a set of normalized relational tables, each containing a single attribute and a primary key — where the "key" is simply the ordinal array index. Reconstructing a full row from columnar storage isn't merely analogous to a relational join — it structurally is one, with positional index as the implicit join key. This reframing unifies query operations like projections and joins with low-level format manipulation, suggesting columnar storage and database normalization operate on the same conceptual continuum.
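The claim that row reconstruction structurally is a join can be made concrete in a few lines (illustrative data only):

```python
# Row store: each record's fields sit together.
rows = [("alice", 30, "NYC"), ("bob", 25, "SF")]

# Column store: one contiguous array per attribute; the array index
# is the implicit primary key shared by every column.
names  = ["alice", "bob"]
ages   = [30, 25]
cities = ["NYC", "SF"]

# Equivalently, each column is a fully normalized two-column relation
# (key, value), where the key is just the ordinal position:
names_rel  = dict(enumerate(names))
ages_rel   = dict(enumerate(ages))
cities_rel = dict(enumerate(cities))

def reconstruct(i: int) -> tuple:
    # Row reconstruction is an equi-join of the single-attribute
    # relations on their shared positional key.
    return (names_rel[i], ages_rel[i], cities_rel[i])
```

A single-column scan (say, a histogram over `ages`) touches one array and nothing else, while `reconstruct` must probe every relation once per row, which is exactly the row-store/column-store cost trade-off the paragraph describes.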
GPS converts time into distance: satellites broadcast at light speed, and receivers multiply travel time by c to get distance to each satellite. One satellite places the receiver on a ring where its signal sphere intersects Earth; a second narrows it to two points; a third resolves to one via trilateration. Phone clocks drift by microseconds—1 µs equals ~300 m of error—so a fourth satellite solves for clock offset as a fourth unknown, snapping all four spheres to a single intersection. Special relativity slows satellite clocks ~7 µs/day due to orbital speed; general relativity speeds them up ~45 µs/day in weaker gravity, netting +38 µs/day. Engineers pre-compensate by tuning oscillators to tick slightly slow on the ground; without this, GPS would drift ~10 km/day. Modern receivers track 8–12 satellites from GPS, GLONASS, Galileo, and BeiDou, selecting geometry that minimizes Geometric Dilution of Precision (GDOP) by spreading satellites across the sky for sharp ring intersections. Multipath error from signals bouncing off buildings remains GPS's hardest urban challenge.
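The headline numbers are easy to verify with back-of-envelope arithmetic:

```python
C = 299_792_458.0  # speed of light, m/s

# 1 microsecond of clock error maps to roughly 300 m of ranging error.
range_err_per_us = C * 1e-6            # ~299.8 m

# Relativistic clock drift per day: special relativity slows the
# satellite clock ~7 us/day, general relativity speeds it ~45 us/day.
net_drift_us_per_day = -7 + 45         # +38 us/day

# Uncompensated, that drift alone accumulates ~11 km of ranging error
# per day, consistent with the article's ~10 km/day figure.
pos_err_km_per_day = net_drift_us_per_day * 1e-6 * C / 1000
```

The same arithmetic explains why the fourth satellite matters: solving for the receiver clock offset as a fourth unknown removes a microsecond-scale bias that would otherwise dwarf every other error source.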
A YouTube creator built a clean room inside a backyard shed and used it to manufacture DRAM from scratch, attracting wide attention online. The process mirrors industrial semiconductor fabrication, involving heating to 1100°C, diffusion, ion implantation, and layered deposition. A companion video on the same channel documents the clean room's construction from a basic shed, including a positive-pressure filtration system to minimize particulate contamination. The setup was inspired by HackerFab, an open-source collection of tools and resources for hobbyist chip manufacturing. While the video explains DRAM fundamentals — including capacitor charging, leakage, and periodic refresh cycles — the finished RAM is never actually shown being tested on-screen. The video has been praised for its accessibility and technical depth, drawing comparisons to chemistry-education channels like NileRed.
Tim Cook is stepping down as Apple CEO after 15 years, moving to executive chairman while hardware chief John Ternus takes over — a stark contrast to Steve Jobs's illness-forced exit in 2011. Cook leaves with Apple firing on nearly all cylinders: a record iPhone 17 lineup, the $600 MacBook Neo selling faster than chip supply allows, dominant AirPods, and Apple Silicon success. John Gruber frames Cook as the ultimate company steward — never a product visionary, but an operations genius who grew Apple into the world's most valuable company, avoided scandal, and engineered a drama-free succession. Ternus, a 25-year Apple veteran Cook calls "the mind of an engineer, the soul of an innovator," is seen as the product-focused leader Apple now needs. Cook's executive chairman role will center on global policy and geopolitics. Critics note Apple under Cook under-invested in software — particularly for iPad and Vision Pro — favoring iterative hardware over transformative new platforms. Cook's legacy is framed not by any single product but by Apple's institutional stability, with the orderly transition itself being his final, fitting act.
On x86, xor eax, eax dominates for zeroing registers, favored over mov eax, 0 for smaller encoding and over sub eax, eax despite identical byte size. Ironically, sub has better flag behavior — it clears AF while xor leaves it undefined. Raymond Chen attributes xor's victory to a swarming effect: a slight early popularity edge caused compiler writers to adopt it, which led developers to mimic compiler output. Intel later added front-end detection for both xor r,r and sub r,r, renaming destinations to an internal zero register, taking effectively zero cycles and breaking input dependency chains. Concern that other CPU vendors may only special-case xor keeps it the preferred choice. Chen notes a colleague's habit of using sub r,r made their assembly authorship identifiable by style alone. On Itanium, the xor trick fails because mathematical operations don't reset the NaT bit, though Itanium's dedicated zero register makes the trick unnecessary.
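The size argument checks out against the raw machine-code encodings (Intel syntax; register-form encodings shown, expressed here as byte literals):

```python
# x86-32 encodings for the three zeroing idioms.
ENCODINGS = {
    "xor eax, eax": bytes([0x31, 0xC0]),        # 31 /r, ModRM C0: 2 bytes
    "sub eax, eax": bytes([0x29, 0xC0]),        # 29 /r, ModRM C0: 2 bytes
    "mov eax, 0":   bytes([0xB8, 0, 0, 0, 0]),  # B8 + imm32: 5 bytes
}

sizes = {insn: len(enc) for insn, enc in ENCODINGS.items()}
```

The 2-versus-5-byte gap is why `mov eax, 0` lost, and the identical 2-byte size is why the xor-versus-sub choice came down to convention and flag semantics rather than encoding.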
Anker announced custom silicon called "Thus," described as the world's first neural-net compute-in-memory AI audio chip, designed to be smaller and more power-efficient than traditional chips. Unlike conventional AI chips that store models separately from computation—requiring constant data transfer per inference—Thus places computation directly where the model lives, eliminating that overhead. This allows handling several million parameters versus the few hundred thousand supported by prior earbud chips, enabling more capable AI within tight size and power budgets. The first application will be Soundcore's upcoming flagship earbuds, featuring eight MEMS microphones and two bone conduction sensors for cleaner call audio across noisy environments. Likely candidates are the Liberty 5 Pro Max ($229.99) and Liberty 5 Pro ($169.99), expected to compete with Apple AirPods Pro 3 and Sony WF-1000XM6. Full product details and additional AI-powered features will be revealed at Anker Day on May 21.
Broccoli converts Linear tickets to GitHub PRs using Claude and Codex CLIs, running entirely in the operator's own GCP project. It uses two Cloud Run workloads (a FastAPI webhook service and a Cloud Run Job runner) over shared Postgres with secrets in GCP Secret Manager. Assigning a Linear issue to a dedicated bot user triggers the pipeline; Claude and Codex also review PR diffs and can push fix commits on request. Deployment takes around 30 minutes via a bundled Codex skill or an 11-step manual path, requiring a GitHub App, dedicated Linear bot user API key, and active OpenAI and Anthropic keys. Bootstrap is idempotent, auto-generates webhook secrets and the database URL, but operators must manually add four secrets to Secret Manager: GitHub PEM, Linear API key, OpenAI key, and Anthropic key. Routing requires the issue be assigned to the bot plus a label matching an enabled repo_key; estimates between 0 and 3 trigger a small-estimate path. Webhook deliveries are durably recorded in Postgres for auditability; dedupe prevents double-processing on redelivery. No hosted version is offered; a closed-source variant exists on separate infrastructure but is unpublished.
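The routing rule might be sketched like this; Broccoli's actual field names, config schema, and whether the estimate bounds are inclusive are all assumptions here:

```python
# Hypothetical sketch of the routing rule described above.
BOT_ID = "linear-bot-user"            # id of the dedicated Linear bot user
ENABLED_REPOS = {"backend", "web"}    # labels that map to enabled repo_keys

def route(assignee_id, labels, estimate):
    # Require assignment to the bot AND a label matching an enabled repo.
    repo = next((l for l in labels if l in ENABLED_REPOS), None)
    if assignee_id != BOT_ID or repo is None:
        return None                   # not routed; pipeline does not fire
    # Estimates between 0 and 3 take the small-estimate path.
    small = estimate is not None and 0 <= estimate <= 3
    return (repo, "small" if small else "standard")
```

Requiring both conditions means a stray label or an accidental assignment alone never triggers a run, which matters for a bot that pushes commits.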
DuckDB v1.5.2 is a patch release featuring bugfixes, performance improvements, and support for the newly stable DuckLake v1.0 lakehouse format, which introduces data inlining, sorted tables, bucket partitioning, and Iceberg-compatible Puffin deletion buffers with full backward compatibility. The Iceberg extension gains support for the GEOMETRY type, ALTER TABLE statements, updates and deletes on partitioned tables, and truncate/bucket partitions. In a new collaboration with Jepsen, a bug in INSERT INTO conflict resolution on primary keys was discovered and already fixed in this release. The WebAssembly shell at shell.duckdb.org was overhauled to support file storage via the .files command, enabling drag-and-drop uploads, file creation via COPY...TO, and in-browser downloads. Benchmarks on an r8gd.8xlarge instance (32 vCPUs, 256 GiB RAM, NVMe SSD) show a ~10% TPC-H QphH@Score improvement — from 778,041 to 854,676 — when moving from Ubuntu 24.04 LTS to Ubuntu 26.04 beta. Upcoming events include DuckCon #7 in Amsterdam on June 24, an AI Council talk on May 12 teasing a "super-secret next big thing," and an Ubuntu Summit talk in late May.
Tokenmaxxing has startup CEOs celebrating massive AI compute bills as a productivity signal—Swan AI's four-person team spent $113k/month on Claude, framing it as a substitute for sales, engineering, and legal staff. Meta tracks token usage on internal "Claudenomics" leaderboards as an employee innovation proxy, while some startups pursue the "one-person billion-dollar company" model; one telehealth startup with two employees and seven contractors projects $1.8B in revenue. Founders argue AI spend scales output exponentially versus linear hiring costs, deliberately structuring entire company functions as agent pipelines. Salesforce countered with "Agentic Work Units" to tie AI spend to real work. Left largely unaddressed: OpenAI and Anthropic are losing money on underpriced compute, AI loops have burned thousands on useless tasks, and AI-generated "workslop" routinely needs costly human cleanup—all raising hard questions about whether token-heavy models are financially sustainable or genuinely productive.
A data engineer with 10+ years posted a candid, drunken Reddit essay on career lessons, preserved after the original account was deleted. Changing companies advances careers fastest; be selectively honest with managers; if on-call wakes you more than once per quarter, fix it or quit. Good code is readable by juniors, great code by freshmen, but the best code is no code; SQL is the most lucrative skill to learn; tech stacks matter less than ~15 core engineering patterns. Documentation is the most underrated skill, TDD is cult-like, and full-stack developers are underpaid given their breadth. Titles matter less than accomplishments, managers have less power than engineers assume, and late-career title decreases create salary room. Financial advice: max out 401(k)s and don't tie self-worth to compensation. On data engineering: Airflow is ubiquitous but poor, most companies don't do streaming, and ML projects fail at high rates. The author credits Reddit for lifting them from minimum-wage work to a six-figure career, and closes citing Conan O'Brien's advice — be kind, work hard — as a genuine life philosophy.
MuJoCo (Multi-Joint dynamics with Contact) is a Google DeepMind-maintained general-purpose physics engine targeting robotics, biomechanics, graphics, animation, and machine learning. Its high-performance C API operates on preallocated data structures compiled from XML model definitions, with an interactive OpenGL-rendered GUI and extensive physics utility functions. Precompiled binaries target Linux (x86-64 and AArch64), Windows, and macOS; Python bindings install via pip install mujoco and target manylinux2014. Beyond Python and C, bindings exist for JavaScript/WebAssembly, C#/Unity, MATLAB Simulink, Swift, Java, Julia, and Rust, plus model converters for OpenSim, SDFormat, OBJ, and OnShape CAD assemblies. Monthly releases follow the modified Semantic Versioning introduced in v3.5.0, with the main branch tip potentially unstable. Google Colab tutorials cover basics, procedural model editing, multithreaded rollout, LQR humanoid balancing, nonlinear least-squares, MJX (JAX-based), and differentiable physics for locomotion policy training. Community contributions flow through GitHub Discussions for questions and Issues for bug reports, with a formal contributor guide and style guide.
PrfaaS (Prefill-as-a-Service) is a cross-datacenter LLM serving architecture that disaggregates prefill and decode operations across loosely coupled clusters. Traditional PD disaggregation requires shared high-bandwidth RDMA networks due to massive KVCache transfers, but newer hybrid-attention models dramatically shrink KVCache size, enabling commodity Ethernet transport. Smaller KVCache alone doesn't resolve real workload challenges: bursty traffic, skewed request lengths, uneven prefix cache distribution, and fluctuating inter-cluster bandwidth. PrfaaS selectively offloads only long-context prefill to compute-dense prefill clusters, pairing model-side KV efficiency with bandwidth-aware scheduling and cache-aware request placement to avoid congestion and poor utilization. This removes the shared low-latency RDMA fabric requirement, enabling independent scaling of prefill and decode capacity. Tested on an internal 1-trillion-parameter hybrid model, PrfaaS achieves 54% higher throughput than homogeneous PD baselines and 32% higher than naive heterogeneous deployments, while consuming only modest cross-datacenter bandwidth.
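A minimal sketch of the selective-offload decision, assuming a token-count threshold and a transfer-time budget (both invented for illustration; the paper's actual scheduler interface and thresholds are not given):

```python
# Hypothetical PrfaaS-style placement: offload only long-context prefill,
# and only when the KV transfer fits the current inter-cluster bandwidth.
def place_prefill(prompt_tokens, kv_bytes_per_token, link_bandwidth_bps,
                  long_context_threshold=8192, max_transfer_s=0.5):
    if prompt_tokens < long_context_threshold:
        return "local"   # short prompts: the decode cluster prefills itself
    transfer_s = prompt_tokens * kv_bytes_per_token * 8 / link_bandwidth_bps
    # Bandwidth-aware: skip offload when shipping the KVCache would take
    # longer than the budget, avoiding congestion on the Ethernet link.
    return "prefill_cluster" if transfer_s <= max_transfer_s else "local"
```

The hybrid-attention point fits here too: shrinking `kv_bytes_per_token` shrinks `transfer_s` proportionally, which is what moves the decision from "needs RDMA" to "fits commodity Ethernet."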
The safe-gc crate is a Rust GC library with zero unsafe code in both API and implementation, enforced by forbid(unsafe_code). Accessing objects requires indexing into a Heap rather than dereferencing pointers directly — the key design choice that keeps the library within Rust's borrowing rules. Two reference types exist: Gc<T> (a Copy, non-rooting handle for references between GC objects) and Root<T> (non-Copy, rooting, keeps objects alive across collections). Storage uses per-type Vec-backed arenas with index-based free lists. Mark-and-sweep uses per-type mark stacks and recycles unreachable slots into the free lists. Rust's Drop works directly as a finalizer: since a Drop impl has no Heap access, it cannot cause use-after-free. Dangling Gc<T> references produce either a panic or an ABA-style wrong-object access — both memory-safe, not undefined behavior. A copying collector was abandoned because heterogeneous forwarding pointers would have required simultaneous mutable borrows that Rust's borrow checker disallows.
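The arena-plus-free-list design translates to a short language-agnostic sketch (here in Python, standing in for the crate's per-type Vec arenas; handles are plain indices, as Gc<T> is in safe-gc):

```python
# Sketch of an index-addressed arena with a free list of swept slots.
class Arena:
    def __init__(self):
        self.slots = []   # Vec-backed storage; a handle is an index into it
        self.free = []    # indices of swept slots available for reuse

    def alloc(self, value) -> int:
        # Reuse a swept slot if one exists; otherwise grow the Vec.
        if self.free:
            i = self.free.pop()
            self.slots[i] = value
            return i
        self.slots.append(value)
        return len(self.slots) - 1

    def get(self, handle: int):
        # Every access goes through the heap, so a stale handle can at
        # worst hit the wrong live object (ABA) or an out-of-range index
        # (a panic in Rust) -- never freed memory.
        return self.slots[handle]

    def sweep(self, handle: int):
        # An unmarked slot is recycled, not deallocated.
        self.slots[handle] = None
        self.free.append(handle)
```

The safety argument is visible in `get`: because there is no raw pointer to dangle, the worst failure modes are an index panic or reading a recycled slot, both of which are memory-safe.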