Seroter's Daily Reading — #758 (April 7, 2026)
Listen: https://blossom.nostr.xyz/a0f029a0ebc7e12e1b5969f644975ee8d3f51787d2f33ff7e77b2bdae21517b8.mpga
Source: Seroter’s Original Post
Welcome to Seroter’s Daily Reading, episode 758, from April 7th, 2026. It’s a Tuesday, and Seroter mentions it’s feeling later in the week than Tuesday, probably because Google Cloud Next is sold out and the team is heads down preparing for the big tech event. Let’s get into the reads.
First up, Sebastian Raschka with a piece called “Components of A Coding Agent.” This one really resonated with Seroter because the AI space has suddenly started throwing around terms like “harness” and assuming everyone knows what we’re talking about. Raschka cuts through the confusion with a clear taxonomy. An LLM is the core next-token model. A reasoning model is still an LLM, but one trained or prompted to spend more inference compute on intermediate reasoning. An agent, then, is a control loop that wraps the model and decides what to inspect, which tools to call, how to update state, and when to stop. A coding harness is the software scaffold around that agent loop that manages context, tool use, prompts, state, and control flow. So the LLM is the engine, a reasoning model is a beefed-up engine, and the harness is what gets the most out of it. Raschka argues that vanilla versions of the latest LLMs have very similar capabilities. The harness is often the distinguishing factor that makes one product work better than another. If you dropped the latest open-weight model into a harness similar to Codex or Claude Code, it might perform on par. He breaks down six main components of a coding harness: live repo context (collecting stable facts upfront), prompt shape and cache reuse (not rebuilding everything from scratch on every turn), structured tools with validation and permissions, context reduction and output management, transcripts and memory for session resumption, and delegation to bounded subagents. This is a solid framework for understanding why coding tools feel so much more capable than the same models in a plain chat interface.
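Raschka's distinction between the model and the loop around it can be sketched in a few lines. Everything below is a hypothetical toy, not any product's actual harness: the "model" is a stub, and the harness owns the state, the tool execution, and the stopping decision.

```python
# Minimal sketch of an agent control loop wrapping an LLM: the model proposes
# the next action, and the harness executes tools, updates state, and decides
# when to stop. All names here are hypothetical.

def fake_model(messages):
    """Stand-in for an LLM call: asks for one tool, then finishes."""
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "call_tool", "tool": "read_file",
                "args": {"path": "README.md"}}
    return {"action": "finish", "answer": "done"}

TOOLS = {
    "read_file": lambda args: f"<contents of {args['path']}>",  # stubbed tool
}

def agent_loop(model, user_goal, max_turns=5):
    messages = [{"role": "user", "content": user_goal}]  # harness-managed state
    for _ in range(max_turns):
        step = model(messages)
        if step["action"] == "finish":               # model decides to stop
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])   # harness runs the tool
        messages.append({"role": "tool", "content": result})  # update context
    return "max turns reached"

print(agent_loop(fake_model, "summarize the repo"))  # -> done
```

Raschka's six harness components all live inside (or alongside) that loop: what goes into `messages`, how tool calls are validated, and when the transcript gets compacted or persisted.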
Moving on to something completely different, but no less relevant to the tech professional: “Nobody Is Coming to Save Your Career” from A Life Engineered by Steve Huynh. This one cuts straight to the point. Your manager is not thinking about your career growth right now. Neither is your skip-level. The only person whose job it is to grow your career is you. Huynh spent 18 years at Amazon with over 20 managers. They were mostly good, some great. But not one of them ever came to him unprompted and said, “Let’s talk about your career growth.” Every big opportunity, every promotion, including the one to Principal Engineer, happened because he drove it. He started the conversations, and good managers supported him. The piece lays out three uncomfortable truths. First, your manager is not your career coach. Their job is too big and too reactive to also be a proactive career coach for everyone on their team. The reframe: your manager is your most powerful career resource, but only if you activate them. You need to tell them you want to grow, or their default assumption is that you’re content. Second, your timeline is your responsibility. Most people are passengers in their own careers, waiting for good things to happen to them. Huynh was a passenger for the first ten years at Amazon. When he finally started steering, the pace changed immediately. Third, and this one stings: your company’s incentives are not aligned with yours. The company has figured out the perfect arrangement: you’re good at your job and you don’t cause problems. From their perspective, this is ideal. But from your perspective, it’s a trap. Comfort may feel like stability, but it’s stagnation in disguise if you have ambitions. The push to change that has to come from you.
From HBR, “Burnout Looks Different Across the Org Chart.” This framing is useful: burnout isn’t one thing with one solution. It manifests differently depending on where you sit in the organization and what you’re accountable for. Early-career employees burn out from ambiguity and lack of control. They’re constantly guessing what good looks like, spending more time decoding expectations than doing the work. The research is clear: lack of control and unclear expectations are stronger predictors of burnout than number of hours worked. Mid-career managers burn out differently. They experience what the article calls “compression.” They have increased responsibility without a corresponding increase in authority or support. They’re absorbing pressure from above while protecting the team below. Many managers aren’t burning out because they’re working long hours. They’re improvising by logging in on Sundays and staying half-connected at night to regain control in systems that don’t support focus and clarity. The root cause of burnout is rarely personal failure. It’s usually a design failure. Poor workflows create constant urgency. Misaligned incentives normalize exhaustion. When burnout persists despite individual effort, it signals a breakdown in how power, risk, and reward are structured.
Chrome has new productivity features, including vertical tabs and immersive reading mode (new Chrome features). Seroter says he’s not sold on switching to vertical tabs, but maybe. We’ll see if this catches on.
Then Forbes with “Microsoft’s Agent Stack Confuses Developers While Rivals Simplify.” This is a critique of Microsoft’s approach to AI agents. The company ships Agent Framework 1.0, but Azure’s agent stack spans too many surfaces: Agent Framework, Copilot Studio, Foundry Agent Service, and probably more. The piece argues that hyperscalers don’t always find the right abstractions or focus. Meanwhile, Google and AWS are offering cleaner developer paths. Platforms are hard. It’s a reminder that being first and being most coherent aren’t the same thing.
David Mohl chimes in with “I Still Prefer MCP Over Skills.” This one’s for anyone deep in the AI tooling weeds. The AI space is pushing hard for Skills as the new standard, but Mohl thinks that’s a step backward. MCP, the Model Context Protocol, is an API abstraction. The LLM doesn’t need to understand the how; it just needs to know the what. If the LLM wants to interact with a service, it calls the tool, and the MCP server handles the rest. This separation brings advantages: zero-install remote usage, seamless updates when services add new tools, saner authentication handled gracefully, true portability across clients and devices, and sandboxing by default. Skills, by contrast, often require installing a dedicated CLI. But what if you aren’t in a local terminal? ChatGPT can’t run CLIs. Neither can Perplexity or the standard web version of Claude. Skills that rely on CLIs are dead on arrival for many environments. Mohl’s framing: Skills should be pure knowledge layers. Teaching an LLM how to format a commit message or use internal jargon works great. But for giving an LLM actual access to services, MCP is the pragmatic choice. We should be building connectors, not just more CLIs. Seroter says these points resonate with him. He doesn’t want to write giant skill files that he has to store, share, and maintain. MCP does a lot of good things for him.
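The what-versus-how split Mohl describes can be illustrated with a toy tool server. To be clear, this is not the real MCP SDK or wire protocol, just the shape of the separation: the model-facing side only names a tool and its arguments, while the server side owns credentials, endpoints, and implementation details.

```python
# Illustration (not the real MCP SDK) of the "what vs. how" split: the client
# states what it wants; the server owns how it happens. All names hypothetical.

class CalendarServer:
    """Server side: knows *how* to talk to the service (stubbed here)."""
    def __init__(self, api_token):
        self._token = api_token  # auth lives on the server, never in the prompt

    def list_tools(self):
        # A tool added here becomes visible to every client, with no reinstall.
        return {"create_event": self.create_event}

    def create_event(self, title, when):
        return {"ok": True, "event": {"title": title, "when": when}}

def call_tool(server, name, **args):
    """Client side: the model only names *what* it wants done."""
    tool = server.list_tools()[name]
    return tool(**args)

server = CalendarServer(api_token="secret")
result = call_tool(server, "create_event", title="standup", when="9am")
print(result["ok"])  # True
```

Mohl's advantages fall out of this shape: the client never installs anything, never sees the token, and automatically picks up new tools the server publishes.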
There’s a paper from arXiv: “Effective Strategies for Asynchronous Software Engineering Agents.” This is about multi-agent collaboration for software engineering tasks. AI agents have become capable at isolated SWE tasks like resolving GitHub issues. But long-horizon tasks with multiple interdependent subtasks still pose challenges. A natural approach is asynchronous multi-agent collaboration, where multiple agents work on different parts simultaneously. But concurrent edits interfere with each other, dependencies are hard to synchronize, and combining partial progress into a coherent whole is challenging. The paper proposes CAID, or Centralized Asynchronous Isolated Delegation, which uses a central manager, isolated workspaces, and git worktrees to let agents work in parallel without stepping on each other’s code. The approach improved accuracy over single-agent baselines by 26.7% on PaperBench and 14.3% on Commit0. The key insight is that branch-and-merge is a central coordination mechanism, and existing SWE primitives like git worktree, git commit, and git merge enable reliable multi-agent collaboration.
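The git primitives the paper leans on are ordinary commands. A rough sketch of the worktree-per-agent pattern, with stubbed "agents" that just write disjoint files, might look like the following (assuming git is installed; this is not the paper's actual CAID implementation):

```python
# Sketch of the branch-and-merge coordination CAID relies on: each "agent"
# gets an isolated worktree on its own branch, and a central manager merges
# the results back. Requires git on PATH; agents here are trivial stubs.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

root = pathlib.Path(tempfile.mkdtemp())
repo = root / "repo"
repo.mkdir()
git("init", cwd=repo)
git("config", "user.email", "manager@example.com", cwd=repo)
git("config", "user.name", "Manager", cwd=repo)
(repo / "base.txt").write_text("shared base\n")
git("add", ".", cwd=repo)
git("commit", "-m", "initial", cwd=repo)

# Manager spawns one isolated workspace and branch per agent.
for agent in ("agent-a", "agent-b"):
    wt = root / agent
    git("worktree", "add", "-b", agent, str(wt), cwd=repo)
    (wt / f"{agent}.txt").write_text(f"work by {agent}\n")  # disjoint edits
    git("add", ".", cwd=wt)
    git("commit", "-m", f"{agent} task", cwd=wt)

# Manager integrates partial progress back into the main branch.
for agent in ("agent-a", "agent-b"):
    git("merge", "--no-edit", agent, cwd=repo)

print(sorted(p.name for p in repo.glob("*.txt")))
```

Because the agents touch disjoint files, both merges succeed cleanly; the hard part the paper studies is exactly what happens when they don't.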
Google has open sourced an experimental multi-agent orchestration testbed called Scion (Google Agent Testbed Scion). Seroter says he got this up and running over the weekend. It’s an interesting take on harness-agnostic orchestration, meaning you can plug in different agent harnesses rather than being locked into one.
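One way to picture "harness-agnostic" orchestration: the orchestrator codes against a small interface, and any harness satisfying it can be swapped in. The interface and class names below are hypothetical, not Scion's actual API.

```python
# Toy sketch of harness-agnostic orchestration: the orchestrator depends only
# on a minimal Harness interface, so implementations are interchangeable.
from typing import Protocol

class Harness(Protocol):
    def run(self, task: str) -> str: ...

class EchoHarness:
    """Stand-in for one agent harness."""
    def run(self, task: str) -> str:
        return f"echo-harness handled: {task}"

class UppercaseHarness:
    """Stand-in for a different harness with different behavior."""
    def run(self, task: str) -> str:
        return f"UPPER HARNESS HANDLED: {task.upper()}"

def orchestrate(harness: Harness, tasks):
    # The orchestrator never cares which harness it is driving.
    return [harness.run(t) for t in tasks]

print(orchestrate(EchoHarness(), ["fix bug"]))
print(orchestrate(UppercaseHarness(), ["fix bug"]))
```

The point is the dependency direction: the orchestration layer owns the contract, and each harness adapts to it, rather than the testbed being hard-wired to one vendor's agent loop.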
From InfoWorld, “27 questions to ask when choosing an LLM.” Seroter calls it a solid list. Tailor it to whatever you actually care about.
Google’s developers blog covers TorchTPU (TorchTPU on Google Developers Blog), a new engineering stack for running PyTorch natively on TPUs. The core principle is simple: it should feel like PyTorch. A developer should be able to take an existing PyTorch script, change their device initialization to tpu, and run their training loop without modifying core logic. They implemented three eager modes: Debug Eager for stepping through operations one at a time, Strict Eager for asynchronous execution that mirrors default PyTorch behavior, and Fused Eager, which automatically fuses operations on the fly into larger chunks for better performance. Fused Eager consistently delivers 50% to 100% performance improvement over Strict Eager with no user setup required.
Now for the big story: Anthropic says its most powerful AI cyber model is too dangerous to release publicly, so it built Project Glasswing (Anthropic Glasswing | VentureBeat coverage). This is significant. Anthropic has a new frontier model called Claude Mythos Preview that can find and exploit software vulnerabilities at a level that surpasses all but the most skilled human security researchers. Mythos has already found thousands of high-severity vulnerabilities, including some in every major operating system and every major web browser. It found a 27-year-old vulnerability in OpenBSD, one of the most security-hardened operating systems in the world. It found a 16-year-old vulnerability in FFmpeg that automated testing tools had hit five million times without ever catching the problem. It autonomously found and chained together vulnerabilities in the Linux kernel to escalate from ordinary user access to complete control of a machine. Given the rate of AI progress, these capabilities will proliferate, potentially beyond actors committed to deploying them safely. The fallout could be severe. Project Glasswing brings together AWS, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. They’ll use Mythos Preview for defensive security work. Anthropic is committing up to $100 million in usage credits and $4 million in direct donations to open-source security organizations. The goal is to give defenders a durable advantage. On Google Cloud, Claude Mythos is available in private preview on Vertex AI (Claude Mythos on Vertex AI). This is a major development in the intersection of AI capability and cybersecurity.
Finally, “Good APIs Age Slowly” from Yusuf Aytas. This is a reminder about API design principles. Good APIs don’t win on first impression. They survive change. The piece argues that stable APIs expose less, assume less, and age better over time. Useful reminders about keeping APIs decoupled from frontends and identifying proper boundaries.
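A toy illustration of "expose less, assume less" (my example, not from the article): the narrow API below hides its storage choice, so the internals can change later without breaking any caller.

```python
# Narrow API surface: callers see intent-revealing methods, not the storage.
class Counter:
    def __init__(self):
        self._events = []  # private; could later become a dict or a plain int

    def record(self, name):
        """Record one occurrence of an event."""
        self._events.append(name)

    def count(self, name):
        """How many times has this event been recorded?"""
        return self._events.count(name)

c = Counter()
c.record("login")
c.record("login")
print(c.count("login"))  # 2
```

A leakier design would return `self._events` directly, coupling every caller to the list representation; that is the kind of exposure that makes an API age badly.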
That’s the 758th daily reading list. A nice mix this time: deep dives on agent architecture and coding harnesses, career ownership and burnout across organizational levels, critiques of fragmented AI developer platforms, a defense of MCP over Skills, multi-agent coordination research, Google Cloud news on TorchTPU and Claude Mythos, and timeless API design wisdom. Plus the Anthropic Glasswing bombshell. Thanks for listening.
Articles Covered
- Components of A Coding Agent — Sebastian Raschka
- Nobody Is Coming to Save Your Career — Steve Huynh / A Life Engineered
- Burnout Looks Different Across the Org Chart — Daisy Auger-Domínguez / Harvard Business Review
- Get more done with new vertical tabs and immersive reading mode in Chrome — Google Chrome Blog
- Microsoft’s Agent Stack Confuses Developers While Rivals Simplify — Janakiram MSV / Forbes
- I Still Prefer MCP Over Skills — David Mohl
- AI Code Reviews with Gemini CLI on GitHub Enterprise Server — Karl
- Effective Strategies for Asynchronous Software Engineering Agents — Jiayi Geng, Graham Neubig / arXiv
- Google Open Sources Experimental Multi-Agent Orchestration Testbed Scion — InfoQ
- 27 questions to ask when choosing an LLM — InfoWorld
- TorchTPU: Running PyTorch Natively on TPUs at Google Scale — Google Developers Blog
- Project Glasswing — Anthropic
- Claude Mythos Preview on Vertex AI — Google Cloud Blog
- Good APIs Age Slowly — Yusuf Aytas