Claude Code's Routines feature, currently in research preview, lets users create autonomous coding sessions triggered by schedules, API calls, or GitHub events like pull requests and pushes. Routines run as full cloud sessions with no approval prompts, with access to shell commands, MCP connectors, and repositories, and all actions appear under the user's identity. Use cases include backlog grooming, alert triage, automated code review, deploy verification, docs drift detection, and cross-language SDK porting. Routines can be created via web, CLI (/schedule), or Desktop app, and support configurable environments with network access controls, environment variables, and setup scripts. By default, Claude pushes only to claude/-prefixed branches unless unrestricted branch pushes are explicitly enabled. GitHub triggers support 18+ event categories with optional filters for author, labels, draft status, and fork origin, with each matching event spawning a new independent session. Routines are individual-account-scoped, count against daily run allowances, and are not shared with teammates. Organizations can enable metered overage billing to continue running routines past daily caps.
Comments: Users express significant vendor lock-in concerns, preferring to stay near the model layer with easy exit options rather than building on Anthropic's proprietary cloud infrastructure. ToS questions arise over whether routing external bot API calls (e.g., from Telegram) through Routines violates subscription terms. Debate exists around whether recently tightened Claude Code usage limits make Routines practical only for high-tier subscribers. Early users confirm Routines previously ran as "Scheduled" tasks, reporting success automating Slack feedback triage and morning activity digests. Security concerns surface around GitHub-triggered sessions being potential prompt injection vectors. Critics note missing organization-level controls that would let teams share and collaboratively manage routines. Comparisons to OpenClaw (OpenAI's acquired agentic coding tool) are frequent, with many viewing Routines as a direct feature port. Feature fragmentation across CLI, Desktop, and web surfaces is cited as evidence of rapid, inconsistent development. Some commenters call for Anthropic to pause new features and instead address persistent issues like context bloat and MCP connector reliability.
Chicago music fan Aadam Jacobs, 59, has been recording concerts since the 1980s, amassing over 10,000 cassette tapes now at risk of degradation. He partnered with Internet Archive volunteers to digitize the collection, with roughly 2,500 recordings posted online so far. Highlights include a 1989 Nirvana performance predating their mainstream breakthrough, alongside recordings of Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and numerous punk bands. Volunteer Brian Emerick drives to Jacobs' home monthly, collects tape boxes, and converts them using cassette decks, while other volunteers clean, organize, and label the recordings — even tracking down song names from obscure forgotten acts. Though Jacobs often used basic equipment, audio engineers have substantially improved sound quality during the digitization process.
Comments: Commenters draw on personal bootlegging histories — recording shows on DAT and minidisc, mailing cassettes, and trading performances from bands like Faith No More, Ween, and Fugazi. One account describes how Ian MacKaye of Fugazi responded to a tip about a 1989 Cambridge show on YouTube, ultimately recovering the original tapes for proper archiving. Others note that bands like Ween and Fugazi benefited from allowing recording, generating large catalogs of live content. Copyright and DMCA concerns surface repeatedly, with users questioning legality and worrying about future takedowns. Some suggest musicians should sell live recordings exclusively to ticketholders. Related resources — Relisten.net and the book "Our Band Could Be Your Life" — are recommended for the era's DIY scene. One user proposes training an AI on paired studio/audience recordings to clean up noisy bootlegs. Concerns about Internet Archive's long-term stability and lack of decentralized backup also arise.
The Orange Pi 6 Plus uses the CIX P1 SoC with 12 cores (4×A520 + 8×A720), Mali G720 GPU, 3-core Zhouyi NPU, 16GB RAM, and dual RTL8126 5GbE — solid specs undercut by immature software. The reviewer built a custom Debian 13 Trixie image, fixing a broken GRUB boot path, a failed two-stage filesystem resize, and missing GPU/NPU userspace. GPU Vulkan required rebinding from open-source panthor to vendor mali_kbase; NPU packages were fragmented and inconsistently versioned. For local AI, Qwen3.5 4B Q4_K_M on llama.cpp Vulkan at 9.7 tok/s was the only production-stable setup, limited by a Mali descriptor-set exhaustion bug requiring -ub 8 micro-batch tuning. ik_llama.cpp on CPU outperformed GPU offload for sparse models — Qwen3.5 35B-A3B IQ2_XXS hit 5.24 gen tok/s vs. 1.07 on stock llama.cpp, though with ~40% empty-response failures. Liquid models posted fast Vulkan tok/s but failed in real agent pipelines. CPU benchmarks yielded a 7-Zip score of ~33k MIPS with no throttling, peaking at 43°C. Idle power averaged 15.5W — well above RPi5 (~3–4W) or RK3588 (~5–8W). After a month of uptime the board ran stably, hosting a personal assistant and a BasiliskII AArch64 JIT porting project.
Comments: Commenters draw a clear line between hardware quality and software maturity. One notes that although Orange Pi produces competitive hardware, a prior bad experience with the OrangePi 5 Max — rendered largely useless by inadequate software support, much like the MangoPi MQ-Pro — pushed them back to Raspberry Pi for its reliable ecosystem, even at a higher cost-per-spec. Another commenter simply expresses the near-universal impulse to acquire every new SBC that comes to market. A third proposes that a single USB-C 3.2 Gen2 port would suffice for most users via a multi-port dock, offloading connectivity cost and keeping heat away from the board enclosure. On the AI side, a commenter raises an underappreciated gap in NPU and quantization benchmarking: the community focuses on tokens/second and supported formats, but objective accuracy benchmarks comparing model output against ground truth are rarely published, making it hard to know where quantization resilience ends and model degradation begins.
5NF is traditionally taught via poorly motivated examples—like Wikipedia's traveling salesman—that present a three-column table split into two-column tables under contrived rules that make no business sense. Starting from business requirements and a logical model is a better approach. Two patterns naturally emerge: the AB-BC-AC triangle (the "ice cream" example, with three M:N junction tables for friends' brand and flavor preferences) and the ABC+D star (the "musicians" example, where a fourth anchor—Performance—links Concert, Musician, and Instrument via 1:N relationships). The star pattern raises a choice between synthetic and composite primary keys based on uniqueness requirements. Both patterns can coexist in one schema. The author also distinguishes recording what happened (performances table) from capturing capability or intent (a musician_skills junction table), noting these model different business processes. Building from a logical model using anchors and links yields a fully normalized schema without explicit 5NF reasoning.
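The two patterns can be sketched as concrete schemas. This is an illustrative sketch only — table and column names are hypothetical, not taken from the article — using SQLite so the DDL is runnable as-is:

```python
import sqlite3

# Pattern 1: the AB-BC-AC "triangle" -- three independent M:N junction
# tables (each pairwise preference is its own fact; no three-way table).
# Pattern 2: the ABC+D "star" -- a fourth anchor (performance) linking
# three entities via 1:N links, with a composite key for uniqueness.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE friend      (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE brand       (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE flavor      (id INTEGER PRIMARY KEY, name TEXT);

-- Triangle: three two-column junctions, not one three-column table.
CREATE TABLE friend_brand  (friend_id INT, brand_id INT,  PRIMARY KEY (friend_id, brand_id));
CREATE TABLE friend_flavor (friend_id INT, flavor_id INT, PRIMARY KEY (friend_id, flavor_id));
CREATE TABLE brand_flavor  (brand_id INT,  flavor_id INT, PRIMARY KEY (brand_id, flavor_id));

CREATE TABLE concert    (id INTEGER PRIMARY KEY, venue TEXT);
CREATE TABLE musician   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE instrument (id INTEGER PRIMARY KEY, name TEXT);

-- Star: the performance anchor records what actually happened; the
-- composite key says one musician plays one instrument at most once
-- per concert. A synthetic id would be the alternative choice.
CREATE TABLE performance (
    concert_id    INT REFERENCES concert(id),
    musician_id   INT REFERENCES musician(id),
    instrument_id INT REFERENCES instrument(id),
    PRIMARY KEY (concert_id, musician_id, instrument_id)
);
""")
print("tables:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
```

A musician_skills junction (musician_id, instrument_id) would sit alongside performance without conflict — one records capability, the other events.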
Comments: Commenters largely agree that numbered normal forms have limited practical value, functioning better as teaching devices than engineering specifications—once developers internalize 2NF and 3NF violations through real bugs, they recognize partial and transitive dependencies by instinct rather than formal definition. Several note the core insights reduce to "avoid redundancy" and "synthesize non-obvious relationships," with the numbered forms serving mainly as vocabulary scaffolding and exam fodder. One commenter cites Essential Tuple Normal Form as an interesting related concept. Practical frustrations surface around NoSQL and DynamoDB usage eroding normalization instincts, and about real-world scenarios where denormalization is necessary for performance—summarized as "normalize till it hurts, then denormalize till it works." A humorous aside notes workplace normalization discussions often devolve into UUID version debates. The broader consensus is that formal normal form definitions matter less than developing engineering judgment to recognize and eliminate data anomalies in practice.
A California resident invoked CCPA rights to demand Flock Safety delete all data about them, their vehicle, and household members. Flock replied — misspelling the requester's name both times — claiming it operates as a "service provider and processor" for its customers (municipalities and police departments), who own the data, and redirected the requester to contact those organizations instead. Flock argued its license plate readers capture only publicly visible vehicle characteristics, not names or addresses; that data is deleted on a rolling 30-day basis by default; that it cannot sell or independently exploit collected data; and that data is used solely for public safety and crime-solving. The requester believes this response is legally inaccurate under CCPA, arguing that Flock — as the entity actually collecting and processing personally identifiable information — is directly obligated to comply, and that its customers cannot override the requester's rights simply by paying Flock. The requester has not yet retained a lawyer but hasn't ruled it out.
Comments: Commenters are split on Flock's "service provider/processor" defense. Many compare it to Uber's independent contractor classification — structured to dodge obligations — while others liken Flock to a cloud custodian like AWS, arguing deletion requests belong with contracting municipalities. Legal observers note CCPA's definition of "sale" is broad and exchange-of-value arrangements may not shield Flock. Deflock.org is cited as a local-level resource. Flock's own LPR policy permits disclosure to address "privacy issues," which some argue covers this case. A GDPR parallel is offered: threatening EU regulators got Cloudflare to remove cached data within 48 hours. Minnesota users report similar stonewalling under MCDPA. Some argue license plates are "publicly available information" exempt from CCPA; others counter that systematic aggregation changes the calculus. A noted irony: to reliably delete records of a person, Flock would first need to continuously scan for and identify them. The broad consensus is that the gap between privacy law on paper and surveillance-as-a-service in practice remains wide and unresolved.
Google is rolling out "Skills" for Gemini in Chrome, letting users save and reuse AI prompts with a single click. Prompts can be saved directly from chat history, then invoked via forward slash (/) or the plus (+) button, running against the current page or multiple selected tabs. A pre-built Skills library covers common tasks like recipe ingredient breakdowns, cross-tab product comparisons, gift selection, and document scanning. Skills can be edited and customized at any time, and sync across all signed-in Chrome desktop devices. On the security side, Skills inherit Chrome's existing Gemini safeguards — including confirmation dialogs before sensitive actions like sending email or adding calendar events — along with automated red-teaming and auto-update protections.
Comments: Commenters are broadly skeptical. A frequently cited frustration is the lack of a universal "no emojis, be concise, no unsolicited suggestions" system prompt baked into AI tools — something users wish didn't require manual re-entry. Security concerns are prominent, with users drawing unflattering comparisons to Chrome Extensions' historically weak permission model and calling the feature premature. Some liken Skills to bookmarklets. Prompt management at scale is flagged as the real unsolved problem — users already struggle to find the right prompt across 50+ entries spread across Raycast, Notes, and Notion. Cynicism about Google's motives runs high, with users questioning what Google gains from having pages submitted more frequently. Others see genuine utility in automating form-filling and breaking down data silos, noting that the pre-API web automation that worked in 2014 is now largely broken. A recurring complaint is Google's binary Personalization on/off model, with users wanting granular, read-only access to specific data corpora rather than full account access. Several commenters dismiss the feature outright as useless or a rehash of existing AI platform patterns.
Space toilets have evolved slowly since Apollo, when astronauts used condom sleeves and plastic bags in stench-filled capsules. Without gravity, toilets use air suction to separate waste, require narrow seats for strong airflow, and rely on porous bags sealed in cylinders. Frank Borman held out nine days on Gemini 7; a Shuttle mission saw a frozen urine spike form on the hull, and another had the toilet reverse flow and fill the cabin with freeze-dried fecal particles. Skylab pioneered a workable design tested via zero-G aircraft flights with volunteers who could defecate on cue; the Shuttle added a camera-equipped training mockup for body alignment. The ISS recycles 98% of urine water, but fecal collection remains Skylab-era bag-and-cylinder tech, with persistent odor likely causing chronic crew undereating. Mars missions add new challenges: 700 days of unattended spacecraft causing microbial buildup, uncertain toilet design for 0.38g gravity, and 3–4 tons of biohazardous waste needing 50-year containment. Torrefaction—roasting waste at 200–250°C into odorless char—could also yield radiation-shielding tiles.
Comments: Commenters found the topic unexpectedly fascinating, noting they had never considered the logistics before. The Skylab zero-G aircraft tests—where volunteers had to defecate on cue—answered one user's question about who tests such systems in real conditions. The Shuttle training mockup, where crewmates watched and cheered as colleagues aligned with a camera in the waste tube, drew particular amusement about the unusual crew intimacy required. One commenter raised whether centrifugal-gravity ships would make standard toilets simpler than engineering dedicated space toilets. The torrefaction method sparked a note linking it to ancient traditions of burning dung. A few users flagged the article as a poor lunch read. One commenter summarized key takeaways well: astronauts avoid going as much as possible, the process is unpleasant for crew, odor likely explains chronic undereating, and Borman's 9-day streak on Gemini 7 was remarkable—though the article left unclear whether he made the full 14 days. The consensus was that space sanitation remains a surprisingly under-discussed bottleneck for long-duration spaceflight.
Plain is an opinionated, full-stack Python web framework forked from Django, built to work well for both human developers and AI coding agents. It features typed models backed by PostgreSQL, class-based views, and a Router URL pattern, all with explicit, predictable syntax. The framework ships 30 first-party packages covering auth, background jobs, email, caching, REST APIs, htmx/Tailwind frontend, feature flags, analytics, and more. A key differentiator is built-in agent tooling: always-on guardrails stored in rules files, CLI-accessible documentation, and slash-command skills for installing packages, upgrading, performance tracing, and bug reporting. Developer commands consolidate common tasks — plain dev starts an HTTPS dev server, plain fix runs formatting and linting across Python/CSS/JS, and plain check validates migrations and preflight requirements. The stack is opinionated: Python 3.13+, Postgres, Jinja2, uv/ruff/ty from Astral, oxc/esbuild for JavaScript, and pytest. Plain is developed by the PullApprove team and licensed under BSD-3.
Comments: The central skepticism is whether building a framework "for agents" is self-defeating: since LLMs learn from training data, agents are already fluent in Django and Go, and a less-documented fork gives them less to work with, not more. Some describe Plain as an arbitrary Django reskin with superficial changes, while others dismiss it as itself being AI-generated. A more sympathetic view argues that opinionated frameworks with minimal boilerplate do genuinely reduce lines a human must review, making agentic projects more sustainable — and that Django's conventions already serve this goal well. Django fans acknowledge the fork's appeal given Django's age and bloat, but question whether Plain clears a high enough bar over existing options. Practical questions about how it compares to FastAPI + SQLModel go unanswered. The consensus is cautious: the underlying thesis — that tight, opinionated defaults help both humans and agents — has merit, but Plain's execution as a lightly forked Django hasn't yet convinced developers it is the right vehicle.
OpenSSL 4.0.0 is a major feature release with significant breaking changes and new capabilities. On the removal side, it drops support for SSLv2 Client Hello, SSLv3 (deprecated since 2015, disabled by default since 1.1.0), the legacy engine subsystem, deprecated elliptic curves and explicit EC curves (now disabled at compile-time by default), the c_rehash script, and several deprecated API functions including fixed SSL/TLS version methods and custom EVP methods. Behavioral changes include making ASN1_STRING opaque, adding const qualifiers to many X509-related API signatures, enforcing PKCS5_PBKDF2_HMAC lower bounds under FIPS, augmenting CRL verification, and changing OPENSSL_cleanup() to run in a global destructor rather than via atexit(). On the new features side, 4.0.0 adds Encrypted Client Hello (ECH, RFC 9849), post-quantum support via SM2MLKEM768, RFC 8998 SM2/SM3 signatures, cSHAKE per SP 800-185, ML-DSA-MU digest support, SNMP and SRTP KDFs, deferred FIPS self-tests, negotiated FFDHE key exchange in TLS 1.2 (RFC 7919), and flexible Windows VC runtime linkage.
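Given the removals, code with legacy dependencies (engines, SSLv3, fixed-version methods) may want to gate behavior on the linked OpenSSL major version. A minimal sketch using Python's stdlib ssl module, which reports the OpenSSL build it is linked against:

```python
import ssl

# Version string and tuple reflect the OpenSSL Python was built against.
print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 3.0.x ..." on current systems
major = ssl.OPENSSL_VERSION_INFO[0]

# In 4.x the engine subsystem and SSLv3 are gone, so any code path that
# loaded an engine or negotiated legacy protocols needs a fallback.
if major >= 4:
    print("OpenSSL 4.x: engine subsystem and SSLv3 removed")
else:
    print(f"OpenSSL {major}.x: legacy features may still be present")
```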
Comments: Users largely welcome the release, with the most enthusiastic reaction going to Encrypted Client Hello support. The removal of the engine subsystem is noted as the only meaningfully disruptive change, though users report that downstream distributions like Fedora have already addressed most engine-dependent packages. The migration from 3.x to 4.0 is described as considerably smoother than the 2.x-to-3.x transition, which was widely remembered as painful. Some users wonder whether the new features — particularly post-quantum support — justify upgrading from 3.5.x. There is also background discussion on OpenSSL's organizational health since the Heartbleed incident, with general sentiment that the project is now on much firmer footing. A linked HAProxy blog post is cited as recommending against older SSL stack versions. One commenter humorously references a pending "Suckerpinch" video in connection with the release.
California's A.B. 2047 requires all 3D printers to implement state-certified print-blocking algorithms detecting firearm component designs, while making it a misdemeanor to disable these systems — effectively criminalizing open-source firmware. Critics argue this mirrors failed DRM approaches, enabling manufacturers to lock users into proprietary ecosystems, mandate first-party consumables, and force planned obsolescence. The California DOJ would certify algorithms, maintain banned-blueprint databases, and approve compliant printers — a burden critics say will be outpaced by workarounds. Smaller manufacturers face disproportionate compliance costs, raising market barriers, while secondhand resale risks misdemeanor penalties. Print-blocking will likely require cloud connectivity, creating surveillance risks, and the infrastructure could expand to block copyrighted or political content globally. Critics note 3D-printed guns are already rare and illegal, and simpler methods of making unregistered firearms remain entirely unregulated.
Comments: Commenters broadly view A.B. 2047 as technically unenforceable and suspect incumbent printer manufacturer lobbying as the real driver over genuine safety concerns. Many note that more reliable improvised-firearm methods like metal pipes and CNC machining go unregulated, making 3D printers an illogical target. Technical objections center on evading detection by printing parts in fragments, sourcing printers internationally, or assembling an unlicensed printer — how the hobbyist industry itself began. Comparisons to regulating CNC machines, saws, or pens illustrate the logical absurdity. Some note 2D printer tracking dots and currency-copy restrictions set precedent regulators may invoke. Several ask why ammunition or gunpowder access isn't targeted instead, since a plastic firearm is useless without bullets. Scope creep concerns are prominent — commenters warn the database could expand to Disney IP or political speech. The legislation is widely characterized as performative policymaking by lawmakers with little technical understanding of 3D printing.
Fiverr uses Cloudinary for PDF/image processing in its messaging platform, but opted for public rather than signed/expiring URLs — meaning sensitive files exchanged between clients and freelancers are publicly accessible. Cloudinary supports signed URLs (similar to S3's access controls), making this a configuration choice, not a platform limitation. Worse, publicly accessible HTML pages appear to link to these files, causing hundreds of documents — including IRS Form 1040s and other financial records containing PII — to be indexed by Google. The researcher demonstrated this with a simple site search query. Fiverr simultaneously runs Google Ads targeting tax-filing keywords, meaning it actively attracts users to a service that then exposes their sensitive financial documents, potentially causing tax preparers to violate the GLBA and FTC Safeguards Rule. The researcher followed responsible disclosure by emailing security@fiverr.com 40 days prior; Fiverr's security team never responded. The issue was made public because it falls outside typical CVE/CERT scope as a misconfiguration rather than a code vulnerability.
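The fix the researcher points to — signed, expiring URLs — follows a standard pattern: an HMAC over the path plus an expiry timestamp, verified server-side before serving the file. A generic sketch of that pattern (this is the general idea behind S3 presigned URLs and Cloudinary's signed delivery, not either service's actual signature format; the key and hostnames are made up):

```python
import hashlib, hmac, time
from urllib.parse import urlencode

SECRET = b"demo-secret"  # hypothetical signing key, not a real credential

def signed_url(base: str, path: str, ttl: int = 300) -> str:
    """Build an expiring URL: HMAC-SHA256 over path + expiry timestamp."""
    expires = int(time.time()) + ttl
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{base}{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify(path: str, expires: int, sig: str) -> bool:
    """Server-side check: reject expired links and forged signatures."""
    if time.time() > expires:
        return False  # link has expired
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = signed_url("https://cdn.example.com", "/docs/form-1040.pdf")
print(url)
```

With this scheme a leaked or crawler-indexed URL stops working after the TTL, and stripping or altering the signature fails verification.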
Comments: Users confirm the researcher followed proper disclosure procedures per Fiverr's security.txt and help pages, which direct reports to security@fiverr.com and reference a BugCrowd bug bounty program. Some suggest reporting to Cloudinary's own BugCrowd engagement as an alternative avenue, though they note it may be out of scope. Several users express surprise the story isn't receiving more attention given the severity — leaking and Google-indexing Form 1040s is considered egregious. Users who searched the exposed files found a wide range of sensitive financial and personal documents publicly accessible. One user also discovered an unrelated unpublished book manuscript among the indexed files, underscoring the breadth of exposed content. General reaction is disbelief at both the scale of the exposure and Fiverr's silence.
LangAlpha is an open-source AI agent harness for investment research, built on React 19 + FastAPI + PostgreSQL + Redis. It models investing as a Bayesian iterative process: persistent sandboxed workspaces let research compound across sessions through an agent.md memory file injected into every LLM call. Its core innovation, Programmatic Tool Calling (PTC), has the agent write and execute Python in a Daytona cloud sandbox rather than dumping raw API data into context, enabling multi-year analysis while reducing token waste. A three-tier data hierarchy spans a hosted real-time WebSocket feed, FMP for fundamentals, and free Yahoo Finance as a fallback. A 24-layer middleware stack handles skill loading, auto-summarization, live steering, plan mode with human-in-the-loop approval, and credential leak detection with pgcrypto encryption. The agent ships with 23 pre-built research skills covering DCF models, earnings analysis, and document generation. Parallel async subagents run concurrently with checkpoint-based resume and live monitoring. Price-triggered and cron-based automations, Slack/Discord integrations, and a TradingView-powered web UI complete the platform.
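The PTC idea can be illustrated with a self-contained sketch: instead of pasting thousands of rows of API output into the LLM context, the agent emits code that runs in a sandbox and returns only a compact summary. Everything here is hypothetical — the function names, the synthetic price series, and the summary fields are illustrative, not LangAlpha's actual interfaces:

```python
from statistics import mean

def fetch_daily_closes(ticker: str, years: int) -> list[float]:
    # Stand-in for a real market-data call returning thousands of rows.
    return [100 + (i % 37) * 0.5 for i in range(252 * years)]

def summarize(ticker: str, years: int) -> dict:
    closes = fetch_daily_closes(ticker, years)
    returns = [b / a - 1 for a, b in zip(closes, closes[1:])]
    # Only these few numbers -- not thousands of rows -- go back
    # into the model's context window.
    return {
        "ticker": ticker,
        "rows_processed": len(closes),
        "mean_daily_return": round(mean(returns), 6),
        "low_over_high": round(min(closes) / max(closes) - 1, 4),
    }

print(summarize("DEMO", 10))
```

Ten years of daily closes (~2,500 rows) collapse into four scalars, which is the token-saving mechanism the PTC design targets.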
Comments: Commenters validate core architectural decisions while raising pointed concerns. An OSS market data SDK maintainer confirms naive MCP calls dumping raw data into context are painful at scale, describing evolution toward Parquet-file caching queryable via DuckDB — an approach they suggest would pair well with LangAlpha. The session persistence angle resonates broadly; users note all AI tools suffer from session-bounded thinking, treating single deliverables as endpoints rather than supporting iterative multi-session workflows. One commenter critiques modern AI tools for inheriting "mobile app thinking" that prioritizes sessions over file-centric, user-controlled interoperability. Skeptics question whether the orchestration produces actionable, correct data, asking for validation beyond architecture screenshots. One commenter warns LLMs "lie and cheat" with financial data, claiming 75% of their agent work is dedicated to bug-squashing LLM dishonesty. Others debate whether investing is truly Bayesian versus structure-driven. Enthusiasm is expressed about the "second brain for investing" framing, while some note the demo was inaccessible without account creation.
Starting Feb 24, 2026, all email to Microsoft-hosted addresses (Hotmail, Live, MSN, Outlook) sent via Sendgrid dedicated IPs was deferred with error 451 4.7.650 citing "IP reputation," despite 99% Sendgrid reputation, clean Gmail Postmaster metrics, and proper SPF/DKIM/DMARC setup. Microsoft's SNDS tool showed complaint rates below 0.1% with no red flags. A SQL query revealed a spike of 53 emails to Microsoft addresses in a single minute on Feb 23—just before the incident—likely triggering Microsoft's anti-spam heuristics for "spiky" traffic. Because transactional and bulk emails shared the same two dedicated IPs, users couldn't receive login, billing, or password-reset emails. After submitting a support ticket and escalating, a human at Microsoft replied that throttling had been adjusted; Sendgrid resumed delivery within 72 hours with no emails permanently dropped. As a fix, a Redis sliding-window rate limiter was implemented capping sends at 10 emails/minute per IP to Microsoft addresses. The sender notes Sendgrid should throttle at the IP level by default, and that separating transactional IPs requires ~100k/month volume to stay warm.
Comments: Users with similar experiences describe Microsoft's blocking as opaque and frustrating—SNDS showing all-green status has no apparent bearing on whether IPs get blocked, and the "temporary" rate-limit language masks what can be an indefinite ban. Microsoft support is criticized for auto-responses claiming no issue is detected, links to 15-year-old best-practice guides, and refusal to explain blocks even on long-established static IPs tied to known businesses. The blocking is attributed to heuristic filters reacting to vague anomalies—too much mail, too little, or erratic rates—conditions describing normal operation for most legitimate mail servers. Some users ultimately migrated bulk sending to Amazon SES as a pragmatic solution, viewing the ongoing battle as not worth the operational cost.
ClawRun is an open-source hosting and lifecycle management layer for AI agents that deploys them into secure sandboxes, currently supporting Vercel Sandbox with more providers planned. A single npx clawrun deploy command walks users through choosing an LLM provider and model, configuring messaging channels (Telegram, Discord, Slack, WhatsApp, and more), setting cost limits and network policy, and deploying to a chosen provider. Deployed agents persist in sandboxes that sleep when idle and wake automatically on incoming messages, helping manage costs. The platform includes both a web dashboard and CLI for real-time chat and management, with built-in cost tracking and budget enforcement across all connected channels. Its pluggable architecture allows swapping agents, sandbox providers, and messaging channels independently, and the project is released under the Apache-2.0 license.
Comments: Commenters raise two related concerns about agentic AI deployment in practice. One developer describes building guardrails, observability, and security layers around an AI agent platform for a client, finding the experience deeply frustrating due to unpredictable outputs, rate limits, service interruptions, cron jobs silently disabling themselves, and permissions that fail to persist. They note that while end-users extract some value, trust in day-to-day reliability is fundamentally eroded compared to using LLM APIs directly — where behavior is far more consistent. A second commenter observes a broader structural pattern: agent deployment tools and hosting platforms appear to generate meaningful revenue primarily for the companies running those platforms, not for the developers or businesses deploying agents on top of them — drawing a parallel to how AI tutorial sellers profit more than the students who purchase the courses.
Backblaze quietly stopped backing up OneDrive, Dropbox, Google Drive, and .git directories, burying the change in release notes under "Improvements" with no direct user notification. This reverses their original promise of "no restrictions on file type or size." The technical rationale for cloud folder exclusions is that OneDrive/Dropbox use "files on demand" — local entries are often just stubs, so backup software would have to download terabytes before re-uploading, exhausting disk space. Still, critics note this leaves users exposed to ransomware, accidental deletion, and cloud account bans — exactly what Backblaze was supposed to guard against. The .git exclusion is separately indefensible: local-only repos have no other backup. Exclusion rules live in a mandatory XML file users cannot override via UI, and the current exclusions page doesn't mention Dropbox, OneDrive, or git. Backblaze previously published blog posts arguing Dropbox alone is insufficient and Backblaze is needed as an additional layer — now contradicted by their own product.
Comments: Users are canceling subscriptions en masse, citing broken trust after years of reliance. The OneDrive/Dropbox exclusion gets grudging technical sympathy — files-on-demand stubs would force downloading 1TB+ before re-uploading, wrecking disk space and performance — but the silent rollout with no notification or UI visibility is universally condemned. The .git exclusion draws sharper criticism since local-only repos have no remote fallback. Some clarify .git folders are actually present in backups but hidden by default in the restore UI, requiring "show hidden files" to be toggled, and that restoring a parent folder silently omits hidden subdirectories without that toggle. Backblaze's prior silent dropping of VeraCrypt support is cited as a pattern of eroding trust. Recommended alternatives include restic+B2, Arq, Duplicati, IDrive E2, rsync.net, and Hetzner storageboxes. Several note that force-pushed git history is often recoverable via git reflog or GitHub's API, though this doesn't excuse the backup failure.
Mouse is an interpreted, stack-oriented language designed by Peter Grogono around 1975 for resource-constrained microcomputers, similar to Forth but simpler, using single-character instructions and relying more on variables than on stack manipulation. In a July 1979 Byte Magazine article, Grogono argued Mouse could demonstrate arrays, functions, procedures, nested control structures, local variables, recursion, and parameter passing without the resources typical high-level languages demand. Instructions use symbols: () for loops, [] for conditionals, ^ to exit loops, $ to end programs or define macros, and # to call macros. Variables are letter-based memory addresses — inside macros, lowercase letters are local and uppercase are global; outside macros, lowercase behaves as global. Macros take parameters via %, with arguments delimited by , and terminated by ;, and recursive macros are fully supported. The CP/M version, only 2KB, was updated by Lee R. Bradley from the Z80 version in the 1983 book "Mouse, a Language for Microcomputers" and is available on the Walnut Creek CD, together with sample programs such as FILES.MSE and HELP.MSE.
Comments: Commenters note this topic was previously discussed on Hacker News in August 2024. Peter Grogono's broader legacy is highlighted — he is also the author of a well-regarded book on Pascal, available via the Internet Archive. Reactions to the language itself are brief but positive, with users appreciating its elegance and manageable operational complexity.
Jujutsu (jj) is a distributed version control system (DVCS) designed to be simultaneously simpler, easier, and more powerful than git by synthesizing the best of git and Mercurial (hg). Its core thesis is that a smaller set of composable primitives can replace git's sprawling command set without sacrificing capability. One of jj's key practical advantages is its git-compatible backend, meaning developers can adopt it unilaterally on existing git repositories without requiring teammates to switch — and can revert to pure git with no history loss. Notable features include jj absorb (automatically routing uncommitted changes to the most relevant prior commit), first-class support for non-linear DAG workflows and stacked PRs, merge conflicts treated as persistent state rather than blockers, and a reliable undo system. A significant behavioral difference from git is that file edits are automatically included in the current commit, eliminating the staging area but requiring users to defensively create empty commits before exploratory changes.
Comments: Community reception is sharply divided. Enthusiasts praise jj absorb, stacked PR workflows, and DAG-based development as genuine improvements over git's interactive rebases and merge conflict handling, with some calling it a "mental health saver" for complex multi-PR work. Critics who abandoned jj cite the auto-commit behavior as a footgun — editing files on an old checkout silently rewrites history — and flag that GitHub PR collaboration feels awkward, with no native jj remote concept requiring manual bookmark/push/pull steps that negate many advantages. Practical concerns include difficulty purging accidentally committed secrets and immature GUI tooling. Skeptics question whether jj offers a dramatic enough improvement to overcome git's network effects, comparing it to Mercurial's failure despite being arguably superior. Some argue that AST/semantic diffing — not workflow ergonomics — would be the actual 10x improvement needed to displace git. The git-compatible backend is widely cited as the main reason to experiment risk-free.
YouTube surpassed Disney's media business in 2025 with an estimated $62 billion in revenue versus Disney's $60.9 billion (excluding experiences), making it the world's largest media company. MoffettNathanson values YouTube at $500–$560 billion, far ahead of Netflix's ~$409 billion. Ad revenue exceeded $40 billion for the year, while YouTube Premium, YouTube Music, NFL Sunday Ticket, and YouTube TV form a massive subscription business; YouTube TV's ~10 million subscribers put it on track to overtake Charter and Comcast. The platform has paid out over $100 billion to creators and media partners. CEO Neal Mohan frames the mission as helping creators build audiences and businesses globally. Heavy AI investment is expected to accelerate content production, with top creators already using AI for set design, costumes, and visual effects. MoffettNathanson sees YouTube as uniquely positioned at the media-technology intersection, likely to benefit from structural shifts that are tailwinds for it and headwinds for traditional media, while Netflix remains the only other platform still accelerating alongside it.
Comments: Users celebrate YouTube's unmatched versatility — from machining tutorials and historical footage to niche music — calling it one of humanity's greatest resources and superior to lowest-common-denominator broadcast media. Many dispute the Disney revenue comparison, arguing YouTube's platform-and-network-effects model and Disney's IP-curation model are fundamentally different businesses. Frustration with the recommendation algorithm is widespread: users report repetitive content, clickbait thumbnails, and quality declines over recent years. Many rely on third-party tools — ReVanced, Invidious, and browser extensions — to suppress ads and Shorts, even while paying for Premium. The lack of real competition and rising Premium prices concern users. Ethical criticism targets addictive short-form content pushed to young users and potential ad revenue tied to bots and misinformation. On the creator side, users note that long-form YouTube presence is the most effective distribution channel, and the platform's allowance of independent monetization via Patreon and sponsorships is seen as a key differentiator from traditional media.
A research group has published a collection of free zines on distributed systems topics, available to download, print, and distribute. "Carol's Causal Conundrum," published April 2026 with student collaborator Ayush Manocha, introduces causally ordered message delivery — explaining what it is, what problem it solves, and two classic implementation approaches plus one novel one. "Communicating Chorrectly with a Choreography," published December 2024 with student Ali Ali, covers choreographic programming, where a single unified program describes all participants in a message-passing system and their interactions. A third zine, "Fighting Faults in Distributed Systems," was created by Ali Ali for an undergraduate distributed systems course. The group's lead has written a blog post on using zine creation as an optional assignment, an approach borrowed from Cynthia Taylor at Oberlin College and later adopted at the University of Minnesota Duluth. An NSF CAREER grant and REU supplemental funding supported paid student collaboration, enabling the choreography zine's creation.
Comments: Commenters express surprise at the sparse HN discussion, with one praising the zines as a delightful, human-crafted (no AI) way to present complex CS topics through detailed explanations and illustrations. They speculate that niche subject matter — causal ordering, choreographic programming, distributed fault tolerance — may dampen engagement, coining "TL;DP" (too long, doesn't prompt) as a wry explanation. Separately, another commenter raises a directly related real-world correctness concern: most home automation software using MQTT spreads messages across semantically meaningful topics, but the MQTT spec does not guarantee ordering across topics, creating causal ordering violations even with just two communicating devices — a subtle but significant distributed systems bug that the zines' subject matter speaks to directly.
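The MQTT observation can be made concrete with a small simulation (illustrative Python, not MQTT client code; topic and message names are invented). A broker that guarantees FIFO only within each topic admits every interleaving consistent with per-topic order, so enumerating those interleavings shows that a causally later message can legally arrive first:

```python
import collections
import itertools


def deliveries(per_topic_queues):
    """Yield every delivery order consistent with per-topic FIFO."""
    msgs = [(topic, i, m)
            for topic, q in per_topic_queues.items()
            for i, m in enumerate(q)]
    for perm in itertools.permutations(msgs):
        next_index = collections.defaultdict(int)
        ok = True
        for topic, i, _ in perm:
            if i != next_index[topic]:      # would violate that topic's FIFO
                ok = False
                break
            next_index[topic] += 1
        if ok:
            yield [m for _, _, m in perm]


# Device A publishes a command; device B reacts and publishes a status.
queues = {
    "light/cmd":    ["turn_on"],      # the cause
    "light/status": ["reports_on"],   # the effect, published afterwards
}
orders = list(deliveries(queues))
```

Both interleavings are legal under the spec, so a subscriber to both topics can observe the status before the command that caused it — the two-device causal violation the comment describes.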
YantrikDB is a Rust-based cognitive memory engine addressing quality degradation that occurs in plain vector databases beyond ~5,000 stored memories. Unlike Pinecone, Weaviate, or LangChain memory wrappers, it adds temporal decay with configurable half-life, memory consolidation via a think() call, factual contradiction detection, entity graphs, and multi-signal recall scoring combining semantic similarity, recency, importance, and graph proximity. It ships as an embeddable single-file library (SQLite-style), networked HTTP server, and MCP plugin, backed by CRDT replication, AES-256-GCM encryption, per-tenant quotas, and Prometheus metrics. Benchmarks on a 2-core cluster show recall p50 at 112ms (embedding-dominated), dropping to ~5ms with pre-computed embeddings. Token efficiency comparisons show file-based memory exceeding 32K context at 500 memories, while YantrikDB stays at ~70 tokens per query with precision improving at scale. The project is v0.5.11 hardened alpha, backed by 1,178 core tests, chaos testing, and cargo-fuzz. Licensed AGPL-3.0; MCP server is MIT.
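The multi-signal scoring idea can be sketched in a few lines. The weights, function names, and exact formula below are assumptions for illustration, not YantrikDB's actual code; only the ingredients (similarity, half-life decay, importance, graph proximity) come from the description.

```python
def recall_score(similarity, age_seconds, importance, graph_distance,
                 half_life=7 * 24 * 3600,
                 weights=(0.5, 0.2, 0.2, 0.1)):
    """Blend the four recall signals into one score.

    similarity     : cosine similarity to the query, in [0, 1]
    age_seconds    : time since the memory was stored
    importance     : stored salience, in [0, 1]
    graph_distance : hops to the query's entities in the entity graph
    """
    recency = 0.5 ** (age_seconds / half_life)   # halves every half_life
    proximity = 1.0 / (1.0 + graph_distance)     # nearer entities score higher
    w_sim, w_rec, w_imp, w_prox = weights
    return (w_sim * similarity + w_rec * recency
            + w_imp * importance + w_prox * proximity)
```

With this shape, two memories with identical embeddings diverge over time: the older one decays along the recency term while importance and graph proximity keep structurally central memories retrievable.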
Comments: Commenters raise pointed concerns about contradiction detection: "CEO is Alice" vs. "CEO is Bob" may not be a contradiction — co-CEOs, subsidiaries, temporal succession, or different companies can all make both statements valid simultaneously, and without broader context the system cannot reliably distinguish these. A user who extensively tested the similar Mem0 system reports abandoning fact-extraction entirely after irony and sarcasm produced nonsensical inferences, subtlety was lost, and even a few wrong injected facts derailed agents — preferring prose summarization instead. Others ask whether real-world benefits have been benchmarked and how it compares to supermemory.ai. One commenter notes that with million-token context windows emerging, leaning on LLM semantic mapping could dramatically simplify the schema. The consolidation mechanism prompts curiosity about whether it uses random sampling plus an LLM merge step. The author built it to solve personal ChromaDB degradation at ~5k memories and explicitly asks whether it solves a real shared pain point or is over-engineered for a narrow use case.
Introspective Diffusion Language Model (I-DLM) addresses a core weakness of diffusion language models (DLMs): they lag behind autoregressive (AR) models due to "introspective inconsistency." I-DLM converts pretrained AR models (Qwen3-8B/32B) using causal attention, logit shift, and an all-masked training objective, then applies Introspective Strided Decoding (ISD) — generating N new tokens per forward pass while verifying prior tokens using a p/q acceptance criterion. I-DLM-8B is the first DLM to match same-scale AR quality, outperforming LLaDA-2.1-mini (16B) by +26 on AIME-24 and +15 on LiveCodeBench-v6 using half the parameters; I-DLM-32B surpasses LLaDA-2.1-flash (100B) across benchmarks. Throughput gains of 2.9–4.1x at high concurrency stem from compute efficiency (TPF²/query_size ≈ 1.22) exceeding AR, unlike SDAR which plateaus at 0.31. A lossless R-ISD variant uses a gated LoRA adapter (rank=128) active only at MASK positions, guaranteeing bit-for-bit identical output to the base AR model at ~1.12x overhead. I-DLM integrates directly into SGLang via standard causal attention with no custom infrastructure, achieving 2.1–2.5x throughput over naive baselines.
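The p/q acceptance criterion is described only by name. Assuming it follows the standard speculative-decoding form (an assumption, not confirmed by the summary), a minimal sketch of block verification:

```python
import random


def accept(p_tok, q_tok, rng=random.random):
    """Keep a drafted token with probability min(1, p/q),
    where p is the verifier's probability and q the drafter's."""
    return rng() < min(1.0, p_tok / q_tok)


def verify_block(p_probs, q_probs, rng=random.random):
    """Accept the longest valid prefix of a drafted block of tokens;
    return how many tokens survive verification."""
    for i, (p, q) in enumerate(zip(p_probs, q_probs)):
        if not accept(p, q, rng):
            return i           # reject here; later tokens are discarded
    return len(p_probs)
```

Under this rule, tokens the verifier rates at least as likely as the drafter did are always kept, and disagreements are resolved stochastically, which is what makes acceptance preserve the verifier's distribution; throughput then comes from how many of the N strided tokens survive per forward pass.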
Comments: Users find the approach compelling: converting a Qwen AR model into a diffusion model while remaining competitive with the base — plus significant throughput gains — is seen as a meaningful result. The lossless R-ISD variant draws particular interest; by grounding the diffuser against the base model's distribution via gated LoRA, it produces byte-for-byte identical output at roughly twice the speed, with further gains expected for batched workloads. Some question whether this qualifies as "true" diffusion, since ISD uses previously generated context to produce next token blocks rather than generating all output simultaneously. Practical deployment questions arise around tooling: users ask whether vLLM supports these models or if migration to SGLang is necessary. One user asks whether diffusion models could support iterative reasoning — generating a block, introspecting, then refining — suggesting interest in combining ISD with chain-of-thought loops. A question about whether throughput gains shift the bottleneck from memory bandwidth to compute goes unanswered. Models were released April 12, 2025 and are available on HuggingFace, including I-DLM-8B, I-DLM-32B, and I-DLM-8B-LoRA.
Most apps, even the most valuable ones, revolve around one or two core concepts — dubbed "nucleus nouns" — with all other entities acting as satellites. Understanding which nouns carry the most "gravity" helps quickly grasp what an app does, cutting through marketing language. Companies that internalize their nucleus nouns can embed them in branding, API docs, and hiring. When teams scope new projects, they should flag whether they're introducing satellite or nucleus nouns — the latter warrants serious executive attention, as seen in Figma CEO Dylan Field's cautious approach to launching FigJam nine years post-founding. In the era of AI-assisted vibecoding, being mediocre across many features is no longer defensible, while deep mastery of a narrow noun-space — as seen in Resend (email) and Plaid (bank linking) — remains a durable competitive moat.
Comments: Commenters connect "nucleus nouns" to domain-driven design entities, noting the idea is intuitive to those with DDD or database backgrounds. Several expand on the framework's limits in isolation: one argues real value emerges from mapping how nouns relate — a "payment intent" gains meaning only through relationships to customers, invoices, and currencies. On the SaaSpocalypse angle, one commenter with extensive B2B experience warns that vibecoding already enables customers to self-build missing features, and once that habit forms, full SaaS replacement becomes realistic. Others suggest the framing overlaps with identifying key user stories, with implementation details gaining weight as user expectations solidify. A recurring frustration is companies inventing clever proprietary names for core nouns, forcing users to learn non-intuitive terminology. Commenters find the framing novel in presentation even if the underlying concept is familiar.
DaVinci Resolve's new Photo page brings professional video color grading tools to still photography, offering a node-based workflow with native RAW support for Canon, Fujifilm, Nikon, Sony, and iPhone ProRAW at resolutions up to 32K (400+ megapixels). Primary color correction, curves, qualifiers, Power Windows, HDR color wheels, and professional scopes (parade, waveform, vectorscope, histogram) are all included. The AI toolkit covers Magic Mask, Depth Map, Relight FX, Face Refinement, AI SuperScale (4x upscaling), UltraNR neural denoising, Patch Replacer, and Film Look Creator for cinematic grain and halation effects. Tethered shooting with Canon and Sony cameras allows simultaneous live capture and color grading. Library management supports import from Apple Photos and Lightroom, AI IntelliSearch, EXIF-based album organization, and Blackmagic Cloud for real-time remote collaboration. GPU acceleration across Metal/Apple Silicon, CUDA, and OpenCL enables fast batch RAW processing. Quick Export delivers JPEG, PNG, HEIF, and TIFF with preserved EXIF metadata. The base version is free; DaVinci Resolve Studio costs $295.
Comments: Users broadly welcome the release as an escape from Adobe's subscription model, citing frustration with Lightroom's stagnation, Capture One's slowness, and Affinity's recent changes. Many photographers had already been loading RAW files into DaVinci via manual workarounds, so the Photo page feels like a long-awaited formalization. Linux users are cautiously optimistic but report persistent codec and ALSA audio issues on Ubuntu/Kubuntu 24.04, even after purchasing Studio licenses. Early testers from Lightroom backgrounds find the interface unintuitive—video editing software with photo tools added—with masking workflows not immediately discoverable. Key open questions include what the $295 Studio license adds for photos specifically, JPEG support, lens correction database breadth, self-hosting of Blackmagic Cloud, and whether the Photo page can run standalone. Blackmagic's free-software/hardware-revenue model is cited as enabling competitive pricing. High-volume shooters (e.g., Sony A9 III at 120fps RAW) are keen to test large-collection performance, while users note open-source tools like Darktable have historically led photo editing innovation despite UX shortcomings.
Google announced it will classify "back button hijacking" as an explicit violation of its malicious practices spam policy, with enforcement beginning June 15, 2026. Back button hijacking occurs when a site interferes with browser navigation so that clicking Back sends users to pages they never visited, presents unsolicited ads, or traps them in redirect loops. Google cites rising prevalence and user reports of feeling manipulated as justification. Sites caught doing this face manual spam actions or automated search ranking demotions. Google advises site owners to audit all scripts, third-party libraries, and ad platform integrations that manipulate browser history, noting hijacking may originate from included advertising code. Sites hit by manual actions can submit reconsideration requests via Search Console after fixing the issue. The two-month advance notice gives owners time to remediate before enforcement begins.
Comments: Users broadly welcome the policy while naming specific offenders: LinkedIn (whose technique of using location.replace() plus pushState to fabricate fake browsing history is dissected), TikTok, Reddit, Instagram, YouTube, Amazon, eBay, and Microsoft support pages. Many argue the fix should happen at the browser level rather than via search penalties, and some note Chrome's long-press back button already lets users skip hijacked entries. Skepticism about enforcement is strong given Google's own products exhibit similar behavior. Commenters question whether legitimate SPAs, WebGL games, and map apps using pushState will be incorrectly penalized. Several call for extending the policy to scroll and Ctrl+F hijacking. A Firefox about:config toggle (browser.history.allowPushState) is noted as a client-side workaround. The irony of Google — whose Chrome and Android platform have their own dark patterns — lecturing others on UX is a recurring theme, with some arguing history manipulation APIs should never have been exposed to web pages at all.
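The dissected replace-then-push technique can be modeled abstractly. This is a toy Python simulation of a history stack, not browser code, and the URLs are invented; it shows why Back lands on a page the user never visited:

```python
class History:
    """Minimal model of a browser session history stack."""

    def __init__(self, entries):
        self.entries = list(entries)   # last entry is the current page

    def push_state(self, url):
        """Model history.pushState: add a new entry on top."""
        self.entries.append(url)

    def replace(self, url):
        """Model location.replace / replaceState: overwrite current entry."""
        self.entries[-1] = url

    def back(self):
        """Model the Back button: pop and return the new current page."""
        if len(self.entries) > 1:
            self.entries.pop()
        return self.entries[-1]


h = History(["google.com/search", "site.example/post"])
h.replace("site.example/interstitial")  # overwrite the entry the user earned
h.push_state("site.example/post")       # user still sees the same post
# Back now goes to the fabricated interstitial, not the search results.
```

The user's mental model is a two-entry stack (search results, post); the script silently turns it into three, inserting a page below the current one, which is exactly the "pages they never visited" behavior the policy targets.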
Tool calling in open-source LLMs is fragmented because each model family encodes function calls in its own incompatible wire format—GPT-OSS uses channel/constrain tokens, DeepSeek uses Unicode delimiters, GLM5 uses XML-like arg_key/arg_value pairs—requiring every inference engine (vLLM, SGLang, TensorRT-LLM, transformers) to write custom parsers per model. Gemma 4 illustrates the problem: reasoning tokens leak into tool-call arguments, decoders strip special tokens before parsers see them, and llama.cpp had to abandon its generic autoparser entirely. Generic parsers fail because wire formats are training-time decisions with no shared constraints. Grammar engines (Outlines, XGrammar) and output parsers need identical format knowledge but are built by different teams, each reverse-engineering the same model specs independently—producing N×M duplicate implementations. The author proposes a declarative configuration spec per model, mirroring how Hugging Face standardized chat templates, so grammar engines and parsers consume one shared description rather than embedding format logic in code.
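The proposed fix can be sketched in miniature. This hedged Python illustration uses invented config fields and delimiter strings (the article proposes the idea, not this schema): one declarative entry per wire format drives a single generic extractor, so grammar engines and parsers could consume the same description instead of each hard-coding it.

```python
import json
import re

# One declarative description per model family; nothing model-specific
# lives in code. Field names and delimiters here are illustrative only.
FORMATS = {
    "xmlish":  {"open": "<tool_call>", "close": "</tool_call>"},
    "unicode": {"open": "\u2997", "close": "\u2998"},
}


def extract_tool_calls(text, fmt_name, formats=FORMATS):
    """Generic extractor: find delimited spans, parse each as JSON."""
    spec = formats[fmt_name]
    pattern = re.escape(spec["open"]) + r"(.*?)" + re.escape(spec["close"])
    return [json.loads(body) for body in re.findall(pattern, text, re.S)]
```

Supporting a new model then means shipping a new `FORMATS` entry rather than a new parser, which is the chat-template analogy: N models and M engines share one description instead of N×M implementations. (Real wire formats are messier than bracketed JSON, so a production spec would also need to describe argument encoding, not just delimiters.)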
Comments: Commenters broadly agree this is a genuine problem, with practitioners noting real pain points like wrestling with GLM's format at runtime. A key insight raised is that wire format choices are baked in at training time, meaning standardization efforts face a multi-year lag before adoption—and big labs have little incentive to standardize given proprietary formats create lock-in. Some users push back on severity, arguing a community parser library could handle multiple formats without difficulty, framing it as more of a social than technical problem. On MCP as a solution, commenters clarify it only standardizes the agent-to-tool communication layer, not the model's internal emission format, so N×M fragmentation persists underneath—and even MCP clients vary in how they handle required params and response truncation. One proposal suggests an engram-layer approach, encoding tool-call semantics in a swappable model layer injected at load time. Inference servers like vLLM do translate outputs to OpenAI-compatible shapes, but commenters note this translation is itself the parser work the article describes.
An experiment applied Claude AI-directed fuzzing (AFL++, AddressSanitizer, Valgrind, UBSan) against lean-zip, a Lean-verified zlib whose core theorem guarantees round-trip correctness for any byte array under 1 GiB. Over 105 million executions across 16 parallel fuzzers, zero bugs appeared in verified application code. Two bugs emerged outside the verification boundary: a heap buffer overflow in Lean 4's C++ runtime lean_alloc_sarray, triggered by a 156-byte ZIP64 file with compressedSize of 0xFFFFFFFFFFFFFFFF causing integer wraparound that allocates ~23 bytes while reading SIZE_MAX bytes (affects all Lean 4 versions through v4.31.0-nightly, fix PR pending); and a denial-of-service in the unverified Archive.lean parser, which passes ZIP header sizes directly to allocation without bounds-checking, causing OOM panic. The archive parser had zero theorems, mirroring CompCert's verified-passes/unverified-frontend split. The runtime bug doesn't invalidate Lean proofs mathematically but affects all compiled Lean programs allocating ByteArrays. Verification proved robust precisely where applied.
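The wraparound arithmetic behind the runtime bug is easy to reproduce. This is a hedged Python model of the 64-bit size computation (Python integers are unbounded, so the wrap is applied explicitly; the 24-byte header constant is an assumption chosen to match the ~23-byte figure in the report):

```python
MASK64 = (1 << 64) - 1          # wrap to 64 bits, like C size_t arithmetic


def alloc_size(header_bytes, count, elem_size=1):
    """Naive allocation-size computation with unchecked 64-bit math."""
    return (header_bytes + count * elem_size) & MASK64


compressed_size = 0xFFFFFFFFFFFFFFFF   # from the crafted ZIP64 record
# header + SIZE_MAX wraps past 2**64 to a tiny allocation, while the
# reader later trusts `count` and walks SIZE_MAX bytes: heap overflow.
```

A checked version would reject the request when `count > (MASK64 - header_bytes) // elem_size` before allocating, which is the bounds check the unverified paths lacked.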
Comments: Comments widely note both bugs fell entirely outside the verified code's scope — the proven compression pipeline was clean, making the title misleading. Users highlight the core limitation: formal proofs are only as strong as their specifications, and an incomplete spec yields verified code that does the wrong thing. The runtime bug lives in the trusted computing base, meaning it doesn't invalidate Lean proofs mathematically but does affect compiled programs — a distinction commenters carefully draw. Knuth's quip ("I have only proved it correct, not tried it") is cited approvingly. The genuinely significant finding — an AI-directed fuzzing campaign discovered a heap overflow in Lean's own unproven runtime — is noted positively. The archive DoS illustrates the spec-completeness problem: decomp(comp(x)) = x says nothing about decomp on arbitrary untrusted input. Others frame the overall result as a win for formal verification since the verified portion was entirely clean across 105 million executions. Minority threads raise the halting problem and the impossibility of complete correctness proofs.
Kontext CLI is an open-source Go binary that manages credentials and governance for AI coding agents, currently supporting Claude Code. Instead of storing long-lived API keys in .env files, developers commit a .env.kontext file with placeholders like {{kontext:github}} or {{kontext:stripe}}. Running kontext start --agent claude triggers OIDC authentication (refresh token stored in system keyring), exchanges placeholders for short-lived tokens via RFC 8693 token exchange, injects them as environment variables, and spawns the agent. A lightweight Unix socket sidecar streams PreToolUse, PostToolUse, and UserPromptSubmit hook events to a backend dashboard for audit and governance. Sessions and credentials expire automatically on exit. The backend communicates via ConnectRPC; the CLI never captures LLM reasoning, token usage, or conversation history. Security features include AES-256-GCM encryption at rest, OIDC-based auth, and system keyring storage. Cursor and Codex support are planned but not yet shipped.
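The placeholder-substitution step can be sketched. The real CLI is Go and its internals are not shown in the article, so this Python illustration is a stand-in: `resolve` here plays the role of the RFC 8693 token exchange, and the token strings are fake.

```python
import re

# Matches markers like {{kontext:github}} and captures the service name.
PLACEHOLDER = re.compile(r"\{\{kontext:([a-z0-9_-]+)\}\}")


def render_env(env_template, resolve):
    """Replace {{kontext:service}} markers with freshly minted tokens."""
    rendered = {}
    for key, value in env_template.items():
        rendered[key] = PLACEHOLDER.sub(
            lambda m: resolve(m.group(1)), value)
    return rendered


env = render_env(
    {"GITHUB_TOKEN": "{{kontext:github}}", "LOG_LEVEL": "info"},
    resolve=lambda service: f"tok-{service}-shortlived",  # fake exchange
)
```

Note that the commenters' central objection survives this design: whatever `resolve` returns still lands in the agent's environment as a plain string, so short lifetimes limit the blast radius of a leak but do not prevent the agent from reading the token.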
Comments: Commenters are broadly enthusiastic but raise pointed security and business-model questions. Several ask whether the OSS CLI is merely a client for a paid backend service and who actually custodies the credentials. A key unresolved concern is that injecting static API keys into the agent's environment doesn't prevent the agent from reading or leaking those keys, since environment variables are accessible to the running process. A related process-isolation question asks whether an attacker running as the same OS user could inspect kontext's memory to extract keys — a limitation that would undermine the security model. Some note that the ideal keychain design never returns the raw secret but instead mints a new token or signs the request, and ask whether services without OIDC support will be accommodated. Comparisons are drawn to Tailscale Aperture and OneCLI as similar prior art. One commenter proposes eBPF as an alternative: intercepting network I/O and rewriting requests with proper tokens transparently, avoiding an extra abstraction layer entirely. Others simply express that the tooling fills a real gap they were actively trying to solve themselves.