Seroter's Daily Reading — #741 (March 13, 2026)

Audio summary covering A2A Protocol v1.0, BigQuery pipe syntax, agentic code reviews, right-sizing teams for AI, MCP vs CLI debate, Amazon S3 at 20, OpenClaw security analysis, and more.

Hey, welcome back. This is a summary of Richard Seroter’s Daily Reading List number 741, from March 13th, 2026. There’s a lot in this one — fourteen links — and a clear theme running through most of them: AI is reshaping how we build software, how we organize teams, and how we think about the tools agents use. Let’s get into it.

First up, the A2A Protocol — that’s Agent-to-Agent — just shipped version 1.0. This is the open standard for how AI agents talk to each other across different platforms and organizations. The big deal with this release is maturity, not reinvention. They’ve added signed agent cards for cryptographic identity verification, multi-tenancy support, and better security flows. The architecture is deliberately web-aligned — JSON over HTTP, gRPC, JSON-RPC — so you can scale agent interactions with the same load balancers and gateways you already use. And they make an important distinction that keeps coming up: A2A is for communication between agents, while MCP is for tool integration within an agent. Most real systems will use both.
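To make the "signed agent card" idea concrete, here's a rough sketch of what an A2A agent card might look like. The field names and values below are illustrative, not copied from the v1.0 spec, and the signature payload is elided:

```json
{
  "name": "invoice-agent",
  "description": "Extracts and validates invoice data for downstream agents",
  "url": "https://agents.example.com/invoice",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "skills": [
    { "id": "extract_invoice", "description": "Parse an invoice and return structured line items" }
  ],
  "signatures": [
    { "protected": "…", "signature": "…" }
  ]
}
```

The card is just a JSON document served over plain HTTP, which is exactly the web-aligned design point: a remote agent advertises what it can do, and the signature block is what v1.0 adds so callers can cryptographically verify who they're talking to.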

Speaking of things from Google, there’s a nice post on BigQuery’s new pipe syntax. If you’ve ever been frustrated that SQL makes you write SELECT first but the engine starts with FROM, this is for you. BigQuery now lets you chain operations with a pipe operator, like a shell script. You start with FROM, pipe into WHERE, pipe into AGGREGATE, and so on. The killer feature is the EXTEND operator, which lets you reference new columns immediately after defining them — something standard SQL won’t let you do. There are also SET and DROP operators for modifying or cleaning up columns mid-pipeline. If you’ve used Splunk’s search language, this will feel very familiar.
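Here's a small sketch of what that reads like, using a hypothetical `orders` table (the table and columns are made up; the operators follow BigQuery's pipe syntax):

```sql
-- Reads top-down, like a shell pipeline: each |> consumes the previous step's output.
FROM orders
|> WHERE order_date >= '2026-01-01'
|> EXTEND price * quantity AS line_total   -- new column, referencable immediately
|> WHERE line_total > 100                  -- standard SQL would need a subquery or CTE here
|> AGGREGATE SUM(line_total) AS revenue GROUP BY customer_id
|> ORDER BY revenue DESC
```

Note the second WHERE filtering on `line_total` right after EXTEND defines it; that's the bit standard SQL won't let you do without restructuring the query.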

Now we get into the real theme of this edition: AI and engineering teams. Daniela Petruzalek from Google Cloud wrote about code reviews in the agentic era. Her take is refreshingly pragmatic — she doesn’t care whether a human or an AI wrote the code. In open source, contributions are already zero-trust. What matters is: does it work, is it safe, does it align with the roadmap? Her reviews have shifted to a higher level, focusing on architecture, public API design, and algorithm choices rather than line-by-line nitpicking. Her philosophy: code is disposable, but the system knowledge you gain writing it isn’t. That knowledge is what survives the AI age.

PostHog ran a piece on what product managers actually do and why engineers should care. The core argument is that with LLMs making building easier, figuring out what to build has become the bigger bottleneck. The top PM skill they highlight is providing context — not just data, but the right framing at the right time. They use the Duolingo example where a single insight about retention rates being five times more impactful than acquisition completely redirected the product roadmap and led to a four-and-a-half-x growth in daily active users. The message to engineers: don’t wait for a PM to hand you context. Build your own discovery systems.

ThoughtWorks published a comprehensive piece on preparing teams for the agentic software development lifecycle. This was the number one topic Seroter hears customers asking about. The article frames it as an organizational transformation, not just a technology upgrade. Engineers evolve from creators to governors — architects and auditors of AI-driven systems. Teams get smaller and more cross-functional. They recommend policy-as-code guardrails, dedicated validation agents that challenge other agents’ work, and a central trust register for tracking agent reliability. The culture shift is the hardest part: cultivating healthy skepticism where you trust agents enough to use them but remain critical of their outputs.
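A "trust register" is easier to picture with a sketch. This is a minimal, hypothetical illustration of the idea — a running reliability score per agent that gates whether its output needs mandatory human review. All names and thresholds are assumptions, not from the article:

```python
# Hypothetical trust register: track each agent's validation pass rate and
# require human review for agents below a reliability threshold.
from dataclasses import dataclass, field

@dataclass
class TrustRegister:
    # agent name -> (validation passes, total validations)
    scores: dict = field(default_factory=dict)

    def record(self, agent: str, passed: bool) -> None:
        passes, total = self.scores.get(agent, (0, 0))
        self.scores[agent] = (passes + int(passed), total + 1)

    def reliability(self, agent: str) -> float:
        passes, total = self.scores.get(agent, (0, 0))
        return passes / total if total else 0.0

    def needs_review(self, agent: str, threshold: float = 0.9) -> bool:
        # Low-trust (or unknown) agents get mandatory human review.
        return self.reliability(agent) < threshold

register = TrustRegister()
for outcome in (True, True, True, False):
    register.record("refactor-agent", outcome)
print(register.reliability("refactor-agent"))   # 0.75
print(register.needs_review("refactor-agent"))  # True
```

The point of the pattern isn't the bookkeeping; it's that the "healthy skepticism" culture shift gets encoded in the pipeline, where a validation agent's verdicts feed the register and the register decides how much autonomy each agent earns.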

Daniel Block wrote a short, sharp post on right-sizing engineering teams for AI. His argument: AI tools solved the “workhorse” half of engineering — raw output — but not the quality half. If AI triples code output but you don’t increase senior reviewers, the ratio of judgment to code gets three times worse. His recommendation: teams of five to seven people, with at most one junior. A reliable signal you’ve gotten it wrong? Pull requests sitting unreviewed for days, not because people are busy, but because nobody feels confident enough to approve them.

Anthropic researchers published data on how AI is actually reshaping the software engineering labor market — not theoretically, but observationally. The headline finding: there’s an enormous gap between what AI could automate and what it’s actually doing. Theoretical capability suggests LLMs could speed up tasks across ninety-four percent of computer and math occupations, but actual observed coverage is just thirty-three percent. Computer programmers are the most exposed at seventy-four percent. Here’s the concerning signal though: among workers aged twenty-two to twenty-five, the monthly job-finding rate in highly exposed occupations dropped by roughly fourteen percent compared to 2022. The entry point into the profession is shifting, even if experienced workers aren’t being displaced yet. The underlying research paper is the Anthropic Economic Index.

The Harvard Business Review piece on authentic leadership under pressure ties these threads together. When your team is navigating all of this change — AI reshaping roles, team sizes shrinking, junior pipelines tightening — how you lead matters enormously. The article focuses on the overlap of economic uncertainty, technological change, and public scrutiny that today’s leaders face.

Now, the MCP debate. Seroter says it hit a fever pitch this week, and he’s right. CircleCI published a thoughtful breakdown of MCP versus CLI for AI-native development. Their framework: CLIs fit the inner loop — fast, local, zero overhead. MCP servers fit the outer loop — external systems, shared infrastructure, structured access. One benchmark found CLI completed tasks with thirty-three percent better token efficiency because MCP loads its full tool schema into the context window before doing anything useful. But MCP wins when you need authentication, audit logging, and consistent response formats across teams. Most teams need both.
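The token-efficiency point comes from how MCP tools are declared. Each tool is advertised with a name, description, and a JSON Schema for its inputs, and all of that lands in the model's context before any work happens. A minimal sketch (the tool itself is hypothetical; the shape follows MCP's tool definition format):

```json
{
  "name": "create_ticket",
  "description": "Create an issue in the team tracker",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" },
      "priority": { "type": "string", "enum": ["low", "medium", "high"] }
    },
    "required": ["title"]
  }
}
```

Multiply that by every tool a server exposes and you can see where the overhead comes from — versus a CLI, where the agent just runs a command it already knows. The flip side is that this schema is exactly what gives you validation, consistent responses, and an auditable surface.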

The New Stack took this further with a piece arguing for running AI agents on Markdown files — skills — instead of MCP servers. The idea is that for many agent workflows, a well-structured Markdown file with instructions and examples is more token-efficient and more transparent than a formal protocol server. Seroter’s been testing this himself and agrees the answer is “both” for most cases.
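For a feel of what a "skill" file looks like, here's an illustrative sketch — the structure and content are hypothetical, just showing the pattern of instructions plus examples in plain Markdown:

```markdown
---
name: changelog-writer
description: Draft a changelog entry from a merged pull request
---

# Changelog writer

1. Read the PR title and diff summary.
2. Write one bullet per user-facing change; skip internal refactors.
3. Match the tone of the existing CHANGELOG.md.

## Example

Input: "Fix: retry uploads on 503"
Output: "- Uploads now retry automatically on transient server errors."
```

The transparency argument is that anyone can open this file, read it, and edit it — no protocol server, no schema, and only the tokens you actually wrote.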

Amazon S3 turned twenty years old. It launched on March 14th, 2006, with a one-paragraph announcement. Today it stores over five hundred trillion objects and handles two hundred million requests per second. The price has dropped eighty-five percent since launch. Perhaps most remarkably, code written for S3 in 2006 still works today, unchanged. Twenty years of infrastructure innovation underneath, complete API backward compatibility on top. That’s what commitment to a building block looks like. Full retrospective on the AWS blog.

O’Reilly published an analysis of what OpenClaw reveals about the next phase of AI agents. The key insight: none of the individual pieces are new — persistent memory, cron jobs, plugin systems, messaging webhooks. What made it take off was wiring them together at the exact moment the underlying models could execute on multi-step plans. The article also highlights a serious security concern: researchers found a hundred and thirty-five thousand OpenClaw instances exposed on the open internet, over fifteen thousand vulnerable to remote code execution.

Which leads naturally to the NanoClaw and Docker partnership. NanoClaw is positioning itself as a security-first alternative in the agent ecosystem, and this integration lets teams run agents inside Docker Sandboxes using MicroVM-based isolation. Docker’s president put it bluntly: agents break every model containers have ever known, because the first thing they want to do is install packages, modify files, and spin up processes. Sandboxes give agents room to act without giving them room to damage everything around them. There’s VentureBeat coverage of the partnership, and TechCrunch has NanoClaw’s origin story.

And finally, Google Cloud announced that Identity-Aware Proxy now integrates directly with Cloud Run — no load balancers required, one click to enable, no added cost. If you need authenticated web apps with minimal configuration overhead, this is a nice quality-of-life improvement.

That’s the list. The throughline this week is unmistakable: AI agents are real, they’re in production, and the industry is scrambling to figure out how to organize teams around them, how to review their output, how to give them tools safely, and how to keep them from wrecking things. The answers are still forming, but the questions are getting much sharper. See you next time.


Articles referenced:

  1. A2A Protocol Ships v1.0 (What’s new in v1.0; GitHub)
  2. BigQuery pipe syntax by example
  3. How to Do Code Reviews in the Agentic Era
  4. WTF does a product manager do?
  5. Preparing your team for the agentic software development life cycle
  6. Right-Sizing Engineering Teams for AI
  7. How is AI already reshaping the software engineering labor market? (Anthropic Economic Index paper)
  8. What Authentic Leadership Looks Like Under Pressure
  9. MCP vs. CLI for AI-native development
  10. The case for running AI agents on Markdown files instead of MCP servers
  11. Twenty years of Amazon S3 and building what’s next
  12. What OpenClaw Reveals About the Next Phase of AI Agents
  13. NanoClaw and Docker partner to make sandboxes the safest way for enterprises to deploy AI agents (TechCrunch)
  14. Simplify your Cloud Run security with Identity Aware Proxy
