Est. 2025 · Daily Edition
The Digest
Signal, entropy & the morning brief
WEDNESDAY, MARCH 11, 2026 · No. 70
Lead Story
A single demonstration is all it takes. DeepMind's latest imitation learning system watches a person fold a paper crane, then replicates the full sequence with sub-millimeter precision on a bimanual robot. The trick: a diffusion-based policy that reasons about contact geometry rather than pixel-matching.
↗ One-shot manipulation learning collapses the data bottleneck that has kept dexterous robots in the lab. Production lines with daily changeovers just got a lot more interesting.
ArXiv Robotics
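For readers who want the mechanics: DeepMind hasn't released code, but the core of any diffusion policy is a denoising loop over a whole action trajectory. The sketch below is illustrative only; the denoiser stub, the contact-feature conditioning, and all the dimensions are invented here, not their architecture.

```python
# Illustrative diffusion-policy sampling loop (not DeepMind's code).
# predict_noise is a stub for the learned denoiser eps_theta(a_t, t, obs).
import numpy as np

rng = np.random.default_rng(0)
T_STEPS = 50                      # number of denoising steps
HORIZON, ACT_DIM = 16, 14         # action-chunk length x joint dimensions

betas = np.linspace(1e-4, 0.02, T_STEPS)   # standard DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(actions, t, contact_feats):
    """Stub for a trained network conditioned on contact geometry."""
    return np.zeros_like(actions)

def sample_action_chunk(contact_feats):
    a = rng.standard_normal((HORIZON, ACT_DIM))    # start from pure noise
    for t in reversed(range(T_STEPS)):
        eps = predict_noise(a, t, contact_feats)
        # DDPM posterior-mean update, then re-noise (except at t = 0)
        a = (a - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a += np.sqrt(betas[t]) * rng.standard_normal(a.shape)
    return a   # a full trajectory chunk, not a single next action

print(sample_action_chunk(np.zeros(32)).shape)   # (16, 14)
```

The design choice that matters: the policy denoises an entire trajectory chunk at once, and conditioning on contact features rather than raw pixels is what the paper claims as the novelty.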
Today's Brief
Anthropic quietly shipped recursive tool use in Claude 4, letting the model compose multi-step API calls, inspect intermediate results, and self-correct without human scaffolding. Early benchmarks show 40% fewer failed tool invocations on SWE-bench.
↗ Self-correcting tool use is the missing piece for reliable AI agents. This moves Claude from 'assistant' to 'junior engineer' territory.
AI News
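Anthropic hasn't documented the internals, but the generic self-correcting tool loop is simple enough to sketch. In the snippet below, `call_model` and `run_tool` are hypothetical stand-ins, not Anthropic's API:

```python
# Generic self-correcting tool loop (illustrative; Anthropic's internals
# are not public). call_model and run_tool are hypothetical stand-ins.
import json

def run_agent(task, tools, call_model, run_tool, max_steps=8):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(history, tools)       # model proposes an action
        if step["type"] == "final":
            return step["content"]              # done: answer the task
        try:
            result = run_tool(step["tool"], step["args"])
            history.append({"role": "tool", "content": json.dumps(result)})
        except Exception as err:
            # Feed the failure back; the model inspects it and retries.
            history.append({"role": "tool", "content": f"ERROR: {err}"})
    return None   # give up after max_steps rather than loop forever
```

The self-correction lives in the `except` branch: failures become context instead of crashes.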
Researchers at EPFL built a dynamic typesetting engine that tracks eye movements via webcam and adjusts line length, leading, and word spacing in real time. Comprehension scores jumped 12% in a 200-person study — without readers noticing any changes.
↗ Responsive typography has been a design fantasy for decades. Eye-tracking hardware is now commodity-grade enough to make it real.
Dezeen
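The EPFL engine isn't public. As a toy illustration of the control loop, a gaze-adaptive layout might map regression rate (re-reading) to line length and leading like this; every threshold and field name below is invented for the sketch:

```python
# Toy gaze-adaptive layout controller (the EPFL engine is not public;
# thresholds and field names are invented for this sketch).
def adapt_layout(layout, gaze):
    # Frequent regressions (re-reading) suggest lines are too long
    if gaze["regression_rate"] > 0.15:
        layout["line_length_ch"] = max(45, layout["line_length_ch"] - 2)
        layout["leading_em"] = min(1.8, round(layout["leading_em"] + 0.05, 2))
    # Stable fixations: the layout is working, drift back toward defaults
    elif gaze["mean_fixation_ms"] > 250:
        layout["line_length_ch"] = min(75, layout["line_length_ch"] + 1)
    return layout

layout = {"line_length_ch": 66, "leading_em": 1.40}
print(adapt_layout(layout, {"regression_rate": 0.22, "mean_fixation_ms": 180}))
# {'line_length_ch': 64, 'leading_em': 1.45}
```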
The team behind liquid neural networks published a compression technique that shrinks their continuous-time models by an order of magnitude while preserving accuracy on time-series tasks. The result fits on a microcontroller.
↗ Tiny, adaptive models on edge hardware open the door for real-time robot control without cloud latency. Watch for embedded LNN chips.
MIT Tech Review
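The paper's actual technique isn't detailed above, so treat this as a stand-in: magnitude pruning plus int8 post-training quantization is the textbook route to roughly an order-of-magnitude shrink on microcontroller targets.

```python
# Stand-in, NOT the paper's method: magnitude pruning + int8 quantization,
# the textbook route to roughly 10x smaller models on microcontrollers.
import numpy as np

def compress(weights, prune_frac=0.6):
    w = weights.copy()
    cutoff = np.quantile(np.abs(w), prune_frac)
    w[np.abs(w) < cutoff] = 0.0               # drop the smallest 60%
    scale = np.abs(w).max() / 127.0           # symmetric int8 scale
    q = np.round(w / scale).astype(np.int8)
    return q, scale                           # int8 tensor + one float

w = np.random.randn(256, 64).astype(np.float32)
q, s = compress(w)
# 4x from int8 alone; sparse encoding of the 60% zeros takes it near 10x
print(q.nbytes / w.nbytes)                    # 0.25
```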
A team at ETH Zurich fabricated a 3D-printed acoustic metamaterial that routes sound waves through 90-degree turns with near-zero loss. The lattice structure was designed by a topology-optimization algorithm that ran for three weeks.
↗ Acoustic metamaterials are quietly enabling things architects have wanted for centuries: spaces that are both open and silent.
IEEE Spectrum
A provocative post arguing that retrieval-augmented generation is being superseded by long-context models hit #1 on HN. The counterarguments are stronger than the thesis: RAG still wins on cost, freshness, and auditability. But the window is closing.
↗ The RAG vs. long-context debate is really about where you put the complexity: in the pipeline or in the model. Both will coexist, but the defaults are shifting.
Hacker News
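To make the "complexity in the pipeline" point concrete, here is a minimal RAG skeleton. The `embed` function is a deterministic toy, not a real embedding model; everything else is the pattern itself.

```python
# Minimal RAG skeleton: the pipeline owns chunking, indexing, and
# retrieval. embed() is a deterministic toy, not a real embedding model.
import zlib
import numpy as np

def embed(text):
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = ["Q3 revenue rose 12%.",
        "The API rate limit is 60 requests/min.",
        "Deploys are frozen on Fridays."]
index = np.stack([embed(d) for d in docs])    # rebuilt cheaply as docs change

def retrieve(query, k=2):
    scores = index @ embed(query)             # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# RAG sends only the top-k chunks, and you can audit exactly what it saw.
# A long-context model would swallow all of docs and retrieve internally.
print(retrieve("When can I deploy?"))
```

That index rebuild line is the freshness argument, and the printed chunks are the auditability argument; a long-context model gives you neither for free.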
ISRO released full CAD files and firmware for a modular 50 kg satellite bus under an Apache 2.0 license. Three university teams have already forked the repo and are planning launches for Q3.
↗ Open-source hardware for space lowers the barrier to orbit from 'nation-state' to 'well-funded lab.' The Cambrian explosion in LEO continues.
BBC News
Field: Robotics & Autonomous Systems
The hydraulic Atlas is officially retired. Its electric successor demonstrated parkour, object manipulation, and a backflip in the same run — all at 48 dB. The power density of the new actuators is the real story.
→ Electric humanoids that don't sound like construction equipment can finally share space with humans. Noise was an underrated barrier to deployment.
The Robot Report
A jellyfish-inspired soft robot from Virginia Tech converts wave energy into electrical power through triboelectric nanogenerators embedded in its silicone bell. It operated autonomously for 72 hours in Monterey Bay.
→ Self-powered ocean robots could monitor marine ecosystems indefinitely. The energy-harvesting soft body is the platform, not just the shell.
Robohub
Columbia's new e-skin uses a 1,024-taxel array with sub-millimeter spatial resolution. In blind tests, their robot hand matched human performance in distinguishing 30 fabric types by touch alone.
→ Texture discrimination closes the gap between robot grasping and robot understanding. Sorting, quality inspection, and surgical palpation all benefit.
ArXiv Robotics
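Columbia's classification pipeline isn't described here; a common recipe for texture-from-touch is sketched below. Sliding contact turns texture into vibration, and coarse spectral features plus a nearest-centroid rule already separate many materials.

```python
# Common texture-from-touch recipe (Columbia's actual pipeline is not
# described in the brief above).
import numpy as np

def features(taxel_ts):
    """taxel_ts: (time, n_taxels) pressure samples recorded while sliding."""
    sig = taxel_ts.mean(axis=1)                    # pool the taxel array
    spec = np.abs(np.fft.rfft(sig - sig.mean()))   # vibration spectrum
    spec /= spec.sum() + 1e-9
    return np.array([b.sum() for b in np.array_split(spec, 8)])  # band energies

def fit_centroids(samples, labels):
    feats = np.stack([features(s) for s in samples])
    labels = np.array(labels)
    return {y: feats[labels == y].mean(axis=0) for y in set(labels)}

def classify(sample, centroids):
    f = features(sample)
    return min(centroids, key=lambda y: np.linalg.norm(f - centroids[y]))
```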
A new paper from ETH Zurich shows that a sim-to-real RL policy outperforms hand-tuned MPC on ANYmal across 15 terrain types, including loose gravel and wet ice. Training took 4 hours on a single GPU.
→ When RL is cheaper, faster, and more robust than classical control, the last argument for hand-engineered locomotion evaporates.
ArXiv Robotics
Field Dispatch
Today: history
Reanalysis of a carved bone fragment from a French cave shows it maps the positions of major stars as they appeared 14,000 years ago. The carver accounted for precession — suggesting deep astronomical knowledge in the Upper Paleolithic.
↗ Systematic astronomy predates agriculture by millennia. Our timeline for 'when humans started thinking in systems' keeps getting pushed back.
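A quick magnitude check on why precession matters at this timescale:

```python
# Earth's axis precesses once every ~25,772 years, so the sky the carver
# recorded was nothing like tonight's.
period_yr = 25_772
shift_deg = 14_000 / period_yr * 360
print(f"precessional shift: {shift_deg:.0f} degrees")   # ~196 degrees
```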
Physarum polycephalum, given food sources arranged like major cities, consistently finds near-optimal Steiner tree solutions in hours. A new mathematical proof shows its chemical gradient system is equivalent to a novel class of optimization algorithms.
↗ Biology keeps embarrassing computer science. The slime mold's approach maps onto a class of problems that includes chip layout and fiber optic routing.
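The new proof itself isn't reproduced here, but the classic two-terminal Physarum model this literature descends from (Tero et al., 2007) fits in a few lines: solve for flow, thicken tubes that carry it, let idle tubes decay.

```python
# Classic two-terminal Physarum dynamics (Tero et al., 2007). Converges
# to the shortest path; the Steiner-tree work generalizes this idea.
import numpy as np

nodes = 4
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 2.0)]  # (i, j, length)
D = np.ones(len(edges))          # tube conductivities
src, sink = 0, 3

for _ in range(200):
    # Kirchhoff's laws: node pressures for one unit of flow src -> sink
    A = np.zeros((nodes, nodes))
    for k, (i, j, L) in enumerate(edges):
        g = D[k] / L
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g
    b = np.zeros(nodes)
    b[src] = 1.0
    A[sink] = 0.0; A[sink, sink] = 1.0           # pin sink pressure to 0
    p = np.linalg.solve(A, b)
    for k, (i, j, L) in enumerate(edges):
        Q = D[k] / L * (p[i] - p[j])             # flow through this tube
        D[k] += 0.1 * (abs(Q) - D[k])            # adapt: use it or lose it

print(np.round(D, 2))   # high on the short path 0-1-3, near zero on 0-2-3
```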
Acoustic archaeologists measured the resonant frequencies of 12 Gothic cathedrals and found they cluster around 110 Hz — the same frequency that induces altered states in neuroimaging studies. The clustering suggests the builders tuned for it deliberately.
↗ Architecture as psychoacoustic engineering. The medieval master builders had an empirical understanding of sound-brain interaction that we're only now quantifying.
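Back-of-envelope on the physics: a room dimension L supports axial standing waves at f_n = n·c/(2L), so 110 Hz sits comfortably in the modal range of architectural dimensions. The lengths below are illustrative, not measurements from the study.

```python
# Axial room modes: f_n = n * c / (2 * L). Dimensions are illustrative.
c, f = 343.0, 110.0              # speed of sound (m/s at ~20 C), target Hz
print(f"wavelength at 110 Hz: {c / f:.2f} m")        # ~3.12 m
for L in (10.0, 30.0):           # side-chapel vs nave scale
    n = round(2 * L * f / c)     # nearest axial mode number
    print(f"L = {L:>4} m: mode n = {n} at {n * c / (2 * L):.1f} Hz")
```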
Cross-linguistic analysis of 130 languages confirms the Berlin-Kay hierarchy: every language names black and white first, then red, then green/yellow. A new information-theoretic model explains why — it's about maximizing discriminability with minimal vocabulary.
↗ Language evolution follows optimization principles. The same compression logic that drives neural network quantization apparently drove human color naming.
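Not the paper's model, but a toy version of the discriminability argument: with a budget of k color terms, place prototypes to minimize within-category perceptual distance. That is k-means in a made-up two-axis "perceptual" space.

```python
# Toy stand-in for the discriminability argument (not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
colors = np.vstack([rng.normal([0.9, 0.1], 0.05, (50, 2)),   # light colors
                    rng.normal([0.1, 0.0], 0.05, (50, 2)),   # dark colors
                    rng.normal([0.5, 0.6], 0.05, (50, 2))])  # reds

def kmeans(X, k, iters=100):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None] - C) ** 2).sum(-1).argmin(1)
        C = np.stack([X[assign == j].mean(0) if (assign == j).any() else C[j]
                      for j in range(k)])
    return C

# Compare the prototypes a 2-term vs a 3-term vocabulary settles on
print(np.round(kmeans(colors, 2), 2))
print(np.round(kmeans(colors, 3), 2))
```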
Editor's Take
Today's through-line is legibility — how systems make themselves readable. DeepMind's origami robot reads human motion through contact geometry. The adaptive typography system reads your eyes reading. The slime mold reads chemical gradients to solve network problems. Even medieval cathedral builders were reading acoustic resonance to engineer transcendence. The most powerful interfaces aren't the ones that display more information — they're the ones that sense how you're already processing it and adapt. That principle connects robotics (tactile skin), HCI (eye-tracking type), and biology (slime mold optimization) in ways their respective fields rarely acknowledge.
Get this in your inbox
Delivered every morning at 7:50 AM MYT. No spam, unsubscribe anytime.