Google's A2A Protocol — How AI Agents Talk to Each Other in Production
Explore Google's Agent-to-Agent (A2A) protocol for production multi-agent systems. Learn agent cards, task lifecycles, and how to orchestrate multiple AI agents at scale.
webcoderspeed.com
126 articles
Your free-tier AI image generation endpoint is being used to generate 50,000 images per day by one account. Your "send email" endpoint is being used as a spam relay. Your "convert PDF" API is a free conversion service for strangers. Public endpoints need abuse controls.
The query works fine in development with 1,000 rows. In production with 50 million rows it locks up the database for 3 minutes. One missing WHERE clause, one implicit type cast, one function wrapping an indexed column — and PostgreSQL ignores your index entirely.
Design production-grade AI agents with tool calling, agent loops, parallel execution, human-in-the-loop checkpoints, state persistence, and error recovery.
Product wants features. Engineering wants to fix the architecture. Neither fully understands the other's constraints. The result is either all-features-no-quality or all-refactoring-no-shipping. The fix requires building a shared language around trade-offs, not just better processes.
You have rate limiting. 100 requests per minute per IP. The attacker uses 100 IPs. Your rate limit is bypassed. Effective rate limiting requires multiple dimensions — IP, user account, device fingerprint, and behavioral signals — not just one.
Auto-scaling is supposed to save you during traffic spikes. But misconfigured scalers can thrash (scaling up and down every few minutes), scale too slowly to help, or scale to so many instances they exhaust your database connection pool. Here's how to tune auto-scaling to actually work.
You've been running backups for 18 months. The disk dies. You go to restore. The backup files are empty. Or corrupted. Or the backup job failed silently on month 4 and you've been running without a backup ever since. Untested backups are not backups.
One synchronous, blocking operation in your Node.js server blocks EVERY concurrent request. JSON.parse on a 10MB payload, a for-loop over 100k items, or a synchronous file read — all of them freeze your event loop and make your entire server unresponsive. Here's how to find and eliminate blocking I/O.
Your API logs show 10,000 requests per minute. Your analytics show 50 active users. The other 9,950 RPM is bots — scrapers, credential stuffers, inventory hoarders, and price monitors. They're paying your cloud bill while your real users experience slowness.
Users see stale prices. Admins update settings but the old value is served for 10 minutes. You delete a record but it keeps appearing. Cache invalidation is famously hard — and most implementations have subtle bugs that serve wrong data long after the source changed.
Cache stampede (a.k.a. thundering herd on TTL expiry) is one of the most dangerous failure modes in high-traffic systems. The moment your cache key expires, hundreds of simultaneous requests hammer your database — often killing it. Here's how it happens, and exactly how to fix it.
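One standard fix is a single-flight guard: the first request past a missing key recomputes the value, and concurrent requests for the same key wait for that result instead of hitting the database. A minimal in-process sketch, assuming a plain dict cache and an illustrative `compute` callable (a real deployment would add TTLs and likely use a distributed lock or stale-while-revalidate):

```python
import threading

_lock = threading.Lock()
_cache = {}      # key -> value (TTL handling omitted for brevity)
_inflight = {}   # key -> Event set when the leader finishes recomputing

def get_or_compute(key, compute):
    """Return the cached value; on a miss, let only one caller run compute()."""
    if key in _cache:
        return _cache[key]
    with _lock:
        if key in _cache:                # filled while we waited for the lock
            return _cache[key]
        event = _inflight.get(key)
        if event is None:                # we are the single flight (the leader)
            event = threading.Event()
            _inflight[key] = event
            leader = True
        else:
            leader = False
    if leader:
        try:
            _cache[key] = compute()      # the expensive DB call happens once
        finally:
            event.set()                  # wake waiters (error propagation omitted)
            with _lock:
                del _inflight[key]
        return _cache[key]
    event.wait()                         # followers block until the leader fills the cache
    return _cache[key]
```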
You add ON DELETE CASCADE to a foreign key. You delete a test organization. It cascades to users, which cascades to sessions, orders, invoices, activity_logs — 10,000 rows gone in milliseconds. No warning, no undo. Cascade deletes are powerful and dangerous.
You added a circuit breaker to protect against cascading failures. But it never opens — requests keep failing, the downstream service stays overloaded, and your system doesn't recover. Here's why circuit breakers fail silently and how to configure them correctly.
Server A issues a JWT. Server B validates it 2 seconds later but thinks the token was issued in the future — invalid. Or a token that should be expired is still accepted because the validating server's clock is 5 minutes behind. Clock skew causes authentication failures and security holes.
The startup was running fine at $3,000/month AWS. Then a feature launched, traffic grew, and the bill hit $47,000 before anyone noticed. No alerts. No budgets. No tagging. Just a credit card statement and a very uncomfortable board meeting.
Your serverless function takes 3-4 seconds on the first request, then 50ms on subsequent ones. This is cold start latency — and it's the #1 complaint about serverless architectures. Here's what causes it, how to measure it, and exactly how to minimize it.
"It works on staging" is one of the most dangerous phrases in software. The timeout is 5 seconds in dev, 30 seconds in prod. The cache TTL is different. The database pool size is different. The feature flag is on in staging but off in prod. Config drift makes every deployment a gamble.
Cost visibility as a first-class concern: per-request metering, cost circuit breakers, ROI calculations, spot instances, and anomaly detection for sustainable AI systems.
You deploy a seemingly innocent feature and suddenly CPU spikes from 20% to 95%. Response times triple. The root cause could be a regex gone wrong, a JSON parse on every request, a synchronous loop, or a dependency update. Here's how to diagnose and fix CPU hotspots in production.
You store a price as a JavaScript float. You retrieve it as 19.99. You display it as 20.000000000000004. Or you store a BigInt user ID as JSON and it becomes the wrong number. Serialization bugs corrupt data silently — no error, just wrong values.
Connection pool exhaustion is one of the most common and sneakiest production failures. Your app works perfectly at low load, then at 100 concurrent users it freezes completely. No errors — just hanging requests. Here's the full diagnosis and fix.
Traffic spikes 100x in 5 minutes. Is it a DDoS attack, or did you make the front page of Hacker News? The response is completely different. Block the attack too aggressively and you block your most engaged new users. Don't block fast enough and the attack takes you down.
Your DLQ has 2 million messages. They've been there for 3 months. Nobody noticed. Those are failed orders, unpaid invoices, and unprocessed refunds — silently rotting. Here's how to build a DLQ strategy that's actually monitored, alerting, and self-healing.
The email job has been failing silently for three months. 50,000 emails not sent. Or the background sync has been silently skipping records. Or the backup has been succeeding at creation but failing at upload. Silent failures are the most dangerous kind.
You deploy to all instances simultaneously. A bug affects 5% of requests. Before you can react, 100% of users are hitting it. Canary deployments let you catch that bug when it's hitting 1% of traffic, not 100%.
Your system handles 1,000 users today. You're designing for 10,000. Not 10 million — 10,000. Most "design for scale" advice is written for companies you're not. What actually changes at 10x, and what's over-engineering that will hurt more than help?
Your message queue delivers an event twice. Your consumer processes it twice. The order ships twice, the email sends twice, the payment charges twice. At-least-once delivery is a guarantee — not a bug. Here's how to build idempotent consumers that handle duplicate events safely.
Scale embeddings search with HNSW vs IVFFlat, batch generation, incremental updates, hybrid search, pre/post-filtering, caching, and dimension reduction.
Order created at 10:00. Order cancelled at 10:01. Your consumer processes them in reverse — cancellation arrives first, then creation "succeeds." The order is now in an invalid state. Event ordering bugs are subtle, expensive, and entirely avoidable.
User updates their profile. Refreshes the page — old data shows. They update again. Still old data. They're furious. Your system is eventually consistent — but nobody told the user (or the developer who designed the UI). Here's how to manage consistency expectations in distributed systems.
"We need to pay down tech debt" means nothing to a product manager or CFO. But "every new feature takes 3x longer than it should because of architectural decisions made 2 years ago, and here's the $200k annual cost" is a budget conversation they understand.
You have 200 feature flags. Nobody knows which ones are still active. Half of them are checking flags that were permanently enabled 18 months ago. The code is full of if/else branches for features that are live for everyone. Flags nobody owns, nobody turns off, and nobody dares delete.
Decide between fine-tuning and RAG with decision frameworks, cost/performance tradeoffs, hybrid approaches, and evaluation metrics like RAGAS and G-Eval.
"The app is slow. Fix it." — said by the founder, with no further context. Is the homepage slow? Checkout? API responses? For which users? On mobile? Under what conditions? Turning vague business pressure into actionable performance work requires measurement before code.
A user submits a GDPR deletion request. You have 30 days to comply. But their data is in the main DB, the analytics DB, S3, Redis, CloudWatch logs, third-party integrations, and three months of database backups. You have 30 days. Start now.
The incident was bad. Someone deployed bad code. Someone missed the alert. Someone made a wrong call at 2 AM. A blame postmortem finds the guilty person. A blameless postmortem finds the system conditions that made the failure possible — and actually prevents the next one.
The alert fires. You're the most senior engineer available. The site is down. Users are affected. Your team is waiting for direction. What do you actually do in the first 10 minutes — and what does good incident command look like vs. what most teams actually do?
A developer pushes a "quick test" with a hardcoded API key. Three months later, that key is in 47 forks, indexed by GitHub search, and being actively used by a botnet. Secrets in version control are a permanent compromise — git history doesn't forget.
You hired a senior engineer who looked great on paper. Six months later, they've shipped nothing, dragged down two junior engineers, and the team is demoralized. A bad senior hire costs 10x what a bad junior hire costs. The fix is in what you test for, not just what you look at.
You horizontally scaled your database to 10 shards, but 90% of traffic still hits just one of them. Writes queue, latency spikes, and one node is on fire while the others idle. This is the hot partition problem — and it's all about key design.
Network timeout on a payment request. Client retries. Customer gets charged twice. This is the most expensive bug in fintech — and it's completely preventable with idempotency keys. Here's the complete implementation.
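The core of the idempotency-key pattern: the client generates one unique key per logical charge and reuses it on retries, and the server replays the stored response instead of charging again. A toy in-memory sketch (a production system would persist keys in a table with a unique constraint; `charge` and its response shape are illustrative stand-ins for a gateway call):

```python
import uuid

# Hypothetical in-memory store; in production this is a DB table with a
# UNIQUE constraint on the idempotency key.
_responses = {}

def charge(amount_cents, idempotency_key):
    """Charge once per key; retries with the same key replay the first response."""
    if idempotency_key in _responses:
        return _responses[idempotency_key]   # retry: same response, no second charge
    charge_id = str(uuid.uuid4())            # stand-in for the payment gateway call
    result = {"charge_id": charge_id, "amount": amount_cents, "status": "succeeded"}
    _responses[idempotency_key] = result
    return result
```

A timed-out client simply retries with the same key and gets the original outcome, so a network failure can never produce a second charge.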
You shard by user ID. 80% of writes go to 20% of shards because your top customers are assigned to the same shards. Or you shard by date and all writes go to the current month's shard. Uneven distribution turns a scaling solution into a bottleneck.
Six months in. $800k spent. The project isn't working. Sunk cost bias says keep going. The business case for stopping is clear. Making the engineering argument to kill a project — and knowing when you're right — is one of the hardest senior skills.
The senior engineer proposes Kafka for the notification system. You have 500 users. The junior engineer proposes a direct function call. The senior engineer is technically correct and strategically wrong. Knowing when good architecture is overkill is the skill that separates senior from staff.
Your service elects a leader to run background jobs. The network hiccups for 5 seconds. The old leader thinks it's still leader. The new leader also thinks it's leader. Both start processing the same queue. Now you have duplicate work, corrupted state, and a split-brain.
Build resilient LLM APIs with streaming SSE, exponential backoff, model fallback chains, token budgets, prompt caching, and circuit breakers.
Cut LLM costs and latency with exact match caching, semantic caching, embedding similarity, Redis implementation, cost savings, and TTL strategies.
A misconfigured load balancer can route all traffic to one server while others idle, drop connections silently, or fail to detect unhealthy backends. These problems are invisible until they cause production incidents. Here are the most dangerous LB misconfigurations and how to fix them.
Audit logs are critical for compliance and debugging. But an audit_logs table that grows without bounds will fill your disk, slow every query that touches it, and eventually crash your database. Here's how to keep your logs without letting them kill production.
Your logs are full. Gigabytes per hour. Health check pings, SQL query text, Redis GET/SET for every cached value. When a real error occurs, it's buried under 50,000 noise lines. You log everything and still can't find what you need in a production incident.
Your feature needs an API from the Platform team, a schema change from the Data team, and a design component from the Design System team. All three teams have their own priorities. Your deadline is in 6 weeks. How you manage this will determine whether you ship.
Memory leaks in Node.js are insidious — your service starts fine, runs smoothly for hours, then slowly dies as RAM fills up. Every restart buys a few more hours. Here's how to diagnose, profile, and permanently fix memory leaks in production Node.js applications.
Mid-level engineers are technically strong but often miss the senior behaviors: anticipating downstream impact, communicating trade-offs, owning outcomes beyond their code. Effective mentoring targets the specific gaps, not general advice to "think bigger."
Your queue has 50 million unprocessed messages. Consumers are processing 1,000/second. New messages arrive at 5,000/second. The backlog will never drain. Here's how queue backlogs form, why they're dangerous, and the patterns to prevent and recover from them.
You split your MVP into 12 microservices before you had 100 users. Now a simple feature requires coordinating 4 teams, 6 deployments, and debugging across 8 services. The architecture that was supposed to scale you faster is the reason you ship slower than your competitors.
The industry consensus has shifted. Here's why modular monoliths are winning and when microservices still make sense.
You deploy a migration that runs ALTER TABLE on a 40-million row table. PostgreSQL rewrites the entire table. Your app is stuck waiting for the lock. Users see 503s for 8 minutes. Schema changes on large tables require a completely different approach.
Month 1 — queries are fast. Month 6 — users notice slowness. Month 12 — the dashboard times out. The data grew but the indexes didn't. Finding and adding the right index is often a 10-minute fix that makes queries 1000x faster.
Five years of "just make it work" and your monolith has become a 300,000-line codebase that nobody fully understands. Functions call functions that call functions across domain boundaries. Every change is risky. Senior engineers hoard context. Onboarding takes months.
The N+1 query problem is responsible for more "why is my app slow?" investigations than almost anything else. It hides perfectly in development, then silently kills your database at scale. Here's exactly what it is, how to detect it, and every way to fix it.
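The shape of the bug and of the standard fix can be sketched with plain lists standing in for tables: the N+1 version issues one lookup per parent row, while the fix fetches all children in a single `WHERE user_id IN (...)`-style pass and groups them in memory (the data and function names are illustrative):

```python
# Toy data standing in for a users table and an orders table.
users = [{"id": 1}, {"id": 2}, {"id": 3}]
orders = [
    {"user_id": 1, "total": 10},
    {"user_id": 1, "total": 5},
    {"user_id": 3, "total": 7},
]

def orders_for(user_id):
    """N+1 shape: called once per user, i.e. one query per parent row."""
    return [o for o in orders if o["user_id"] == user_id]

def orders_by_user(user_ids):
    """Fixed shape: one pass over orders (one IN-list query), grouped in memory."""
    grouped = {uid: [] for uid in user_ids}
    for o in orders:
        if o["user_id"] in grouped:
            grouped[o["user_id"]].append(o)
    return grouped
```

With N users, the first shape costs N+1 round trips to the database; the second always costs 2, regardless of N.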
Your webhook processor receives 10,000 events/second. Your database can handle 500 inserts/second. Without backpressure, your queue grows unbounded, memory fills up, the process crashes, and you lose all the unprocessed events in memory.
Something is wrong in production. Response times spiked. Users are complaining. You SSH into a server and grep logs. You have no metrics, no traces, no dashboards. You're debugging a distributed system with no instruments — and you will be for hours.
A user sends 10,000 requests per minute to your API. No rate limiting. Your server CPU spikes to 100%. Your database runs out of connections. Every other user sees 503s. One script can take down your entire service — and it happens more often than you think.
Error rate spikes after deploy. You need to roll back. But the migration already ran, the old binary can't read the new schema, and "reverting the deploy" means a data loss decision. Rollback is only possible if you design for it before you deploy.
Three engineers. Twelve alerts last night. The same flapping Redis connection alert that's fired 200 times this month. Nobody sleeps through the night anymore. On-call burnout isn't about weak engineers — it's about alert noise, toil, and a system that generates more incidents than the team can fix.
A junior engineer with access to production and insufficient guardrails runs a database migration directly on prod. Or force-pushes to main. Or deletes an S3 bucket thinking it was the staging one. The fix isn't surveillance — it's systems that make the catastrophic mistake require extra steps.
Your RDS instance is db.r6g.4xlarge and CPU never exceeds 15%. Your ECS service runs 20 tasks but handles traffic that 4 could manage. You're paying for comfort headroom you never use. Right-sizing recovers real money — without touching application code.
Page 1 loads in 10ms. Page 100 loads in 500ms. Page 1000 loads in 5 seconds. OFFSET pagination makes the database skip rows by reading them all first. Cursor-based pagination fixes this — same performance on page 1 and page 10,000.
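The difference can be sketched with a list standing in for an id-indexed table. Keyset (cursor) pagination filters on the last seen id instead of skipping rows, which in SQL is `WHERE id > :cursor ORDER BY id LIMIT :n` — an index seek rather than a read-and-discard of every earlier row (the function and its arguments are illustrative):

```python
def page_after(rows, last_seen_id, limit):
    """Keyset page: everything after the cursor, in id order, capped at limit."""
    matching = sorted(
        (r for r in rows if r["id"] > last_seen_id),  # the WHERE id > :cursor filter
        key=lambda r: r["id"],                        # ORDER BY id
    )
    return matching[:limit]                           # LIMIT :n
```

The client passes the id of the last row it received as the cursor for the next page, so page 10,000 costs the same index seek as page 1.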
In distributed systems, failure is never all-or-nothing. A service returns a response — but it's corrupt. An API call times out — but the action already executed. A message is delivered — but the reply never arrives. This is partial failure, and it is the hardest problem in distributed systems.
Stripe times out at 30 seconds. Did the charge happen? You don't know. You charge again and double-charge the customer. Or you don't charge and ship for free. Payment idempotency and webhook reconciliation are the only reliable path through this.
One database cannot excel at everything. Learn when to use PostgreSQL, Redis, Elasticsearch, ClickHouse, and vector databases—and how to sync them without chaos.
TechCrunch publishes your launch article at 9 AM. Traffic hits 50x normal. The servers that handled your beta just fine fail under the real launch. You've never tested what happens above 5x. The outage is the first piece of coverage that goes viral.
Defend against prompt injection: direct vs indirect attacks, input sanitization, system prompt isolation, output validation, sandboxed execution, and rate limiting.
Two requests check inventory simultaneously — both see 1 item in stock. Both proceed to purchase. You ship 2 items from 1. Race conditions in distributed systems are subtler than single-process races because you can't use mutexes across services. Here's how to prevent them.
Build production-ready RAG systems with semantic chunking, embedding optimization, reranking, citation tracking, and hallucination detection.
User saves their profile. Page reloads. Shows old data. They save again — same thing. The write went to the primary. The read came from the replica. The replica is 2 seconds behind. Read-after-write consistency is the hardest problem with read replicas.
Redis is full. Instead of failing gracefully, it starts silently evicting your most important cache keys — session tokens, rate limit counters, distributed locks. Your app behaves mysteriously until you realize Redis has been quietly deleting data. Here's how to tame Redis eviction.
The codebase is a mess. Nobody wants to touch it. The "obvious fix" requires changing 40 files. Every change breaks three things. Refactoring legacy code safely requires the strangler fig pattern, comprehensive tests before changing anything, and very small steps.
The disk dies at 2 AM. You have backups. But the restore takes 9 hours because nobody tested it, the database is 800GB, the download from S3 is throttled, and pg_restore runs single-threaded by default. You could have restored in 45 minutes with the right setup.
Your service is degraded, returning errors 30% of the time. Smart clients with retry logic start hammering it — 3 retries each means 3x the load on an already failing system. The retry storm amplifies the original failure until full collapse. Here's how to retry safely.
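A safe retry combines a bounded attempt count with exponential backoff and full jitter, so clients spread out instead of synchronising into a storm. A minimal sketch (the parameter defaults are illustrative, and real code would catch only transient error types rather than bare `Exception`):

```python
import random
import time

def retry(call, attempts=4, base=0.2, cap=5.0, sleep=time.sleep):
    """Retry call() with full-jitter exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:                     # real code: catch transient errors only
            if attempt == attempts - 1:
                raise                         # bounded: give up after the final attempt
            delay = min(cap, base * (2 ** attempt))
            sleep(random.uniform(0, delay))   # jitter de-synchronises retrying clients
```

Injecting `sleep` makes the backoff testable; pairing this with a circuit breaker keeps bounded retries from still overwhelming a hard-down dependency.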
The codebase is painful. The team wants to rewrite it. The CTO wants to maintain velocity. Both are right. The rewrite vs refactor decision is one of the highest-stakes calls in software — get it wrong and you lose two years of productivity or two more years of compounding debt.
The CTO wants to rewrite everything in Rust. The PM wants to skip testing to ship faster. The founder wants to store passwords in plain text "for now." Saying no effectively requires more than being technically right — it requires translating risk into business language.
Traffic spikes 10x at 8 AM on Black Friday. Auto-scaling triggers but takes 4 minutes to add instances. The database connection pool is exhausted at minute 2. The checkout flow is down for your highest-traffic day of the year.
You rename a column. The new service version uses the new name. The old version, still running during the rolling deploy, tries to use the old name. Database error. The migration that passed all your tests breaks production because both old and new code run simultaneously during deployment.
The $500k enterprise deal requires a SOC 2 audit. Your app has hardcoded secrets, no MFA, plain-text passwords in logs, and no audit trail. You have six weeks. This is what a security sprint actually looks like.
You split into microservices but all of them share the same PostgreSQL database. You have the operational overhead of microservices with none of the independent scalability. A schema migration blocks all teams. A bad query in Service A slows down Service B.
The database has a replica. The app has multiple pods. You think you're resilient. Then the single Redis instance goes down, and every service that depended on it — auth, sessions, rate limiting, caching — stops working simultaneously. SPOFs hide in plain sight.
Your query runs in 2ms in development with 1,000 rows. In production with 10 million rows, the same query takes 8 seconds. The database does a full table scan on every single request. Here's how to identify missing indexes, write efficient queries, and build a database that stays fast as data grows.
Network partition splits your 3-node cluster into two halves. Both halves think they're the primary. Both accept writes. Network heals. You have two diverged databases with conflicting data. This is split brain — one of the most dangerous failure modes in distributed systems.
Implement production-grade LLM streaming with SSE, OpenAI streaming, backpressure handling, mid-stream errors, content buffering, and abort patterns.
Every operation is a synchronous HTTP call. User signup calls email service, which calls template service, which calls asset service. Any service down means signup is down. Any service slow means signup is slow. Synchronous coupling is the enemy of resilience.
Practical system design patterns for AI products: async-first LLM architectures, response caching strategies, fallback chains, cost metering, and observability at scale.
System design interviews have evolved. AI features are now common asks. Here's what interviewers are looking for in 2026.
Twilio has an outage. Every user trying to log in can't receive their OTP. Your entire auth flow is blocked by a third-party service you don't control. Fallbacks, secondary providers, and graceful degradation are the only way to maintain availability.
You wrote perfectly async Node.js code — no blocking I/O, no synchronous loops. Yet under load, responses stall and CPU pegs. The culprit is Node.js's hidden libuv thread pool being exhausted by crypto, file system, and DNS operations. Here's what's really happening.
You restart your service for a hotfix. Within seconds, the new instance is overwhelmed — not by normal traffic, but by a thundering herd of requests that had queued up during the restart. Here's why it happens and how to protect your service from its own restart.
Service A calls Service B synchronously. Service B calls Service C. Service C calls Service A. Now a deploy to any of them requires coordinating all three. A bug in Service B takes down Services A and C. This isn't microservices — it's a distributed monolith.
Your server is in UTC. Your database is in UTC. Your cron job runs at "9 AM" — but 9 AM where? Customer in Tokyo and customer in New York both get charged at your server's 9 AM. Your "end of day" reports include data from tomorrow. Timezone bugs are invisible until they're expensive.
Your marketing team runs a campaign. It goes viral. Traffic spikes 50x in 10 minutes. Your servers crash. This is the happiest disaster in tech — and it's entirely preventable. Here's how to build systems that survive sudden viral traffic spikes.
Sessions table. Events table. Audit log. Each row is small. But with 100,000 active users writing events every minute, it''s 5 million rows per day. No one added a purge job. Six months later the disk is full and the database crashes.
The t3.micro database that "works fine in staging" OOMs under real load. The single-AZ deployment that's been fine for two years fails the week of your biggest launch. Underprovisioning is the other edge of the cost/reliability tradeoff — and it has a much higher price.
Compare pgvector (self-hosted), Pinecone (managed), and Weaviate for production RAG. Index strategies, filtering, cost, and migration patterns.
Week 5 combines system design with coding. Design a system at high level, then implement a core component in code. Mirrors senior-level interview formats at all major companies.
A repeatable 45-minute framework for system design interviews. Covers requirements gathering, capacity estimation, high-level design, deep dive, and trade-off discussion.
Master system design problems that test data structure knowledge: LRU/LFU caches, Twitter feed, file systems, and 15+ design problems with full implementations.
Implement an LRU (Least Recently Used) cache with O(1) get and put using a doubly linked list and hashmap. Full 5-language solutions.
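A compact Python version of the same design; `OrderedDict` is itself a hash map threaded through a doubly linked list, so it stands in for the hand-rolled list-plus-hashmap structure while keeping both operations O(1):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # hashmap + doubly linked list in one structure

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used entry
```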
Implement an LFU cache with O(1) get and put. Uses two hashmaps and per-frequency doubly linked lists to track access frequency and recency simultaneously.
Design a simplified Twitter where users post tweets and follow each other, with getNewsFeed returning the 10 most recent tweets from followed users using a min-heap merge.
Implement a file system that creates paths and associates values with them. Uses a Trie or HashMap to map full paths to values with O(L) operations.
Design a hit counter that counts hits in the last 5 minutes using a deque-based sliding window or circular buffer with O(1) amortised operations.
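A sketch of the deque-based variant: storing (timestamp, count) pairs keeps memory bounded by the window length rather than by the hit rate, and expired entries are popped lazily on reads:

```python
from collections import deque

class HitCounter:
    """Count hits in the trailing 300-second window."""
    def __init__(self):
        self.hits = deque()  # (timestamp, count) pairs, timestamps ascending

    def hit(self, timestamp):
        if self.hits and self.hits[-1][0] == timestamp:
            ts, n = self.hits.pop()          # coalesce hits in the same second
            self.hits.append((ts, n + 1))
        else:
            self.hits.append((timestamp, 1))

    def get_hits(self, timestamp):
        while self.hits and self.hits[0][0] <= timestamp - 300:
            self.hits.popleft()              # drop entries outside the window
        return sum(n for _, n in self.hits)
```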
Design a key-value store that returns values at or before a given timestamp. Uses a hashmap of sorted (timestamp, value) lists with binary search for O(log n) get.
Design a search autocomplete system that returns top 3 historical queries matching the current prefix, sorted by frequency then lexicographically. Uses a Trie with per-node frequency maps.
Design a browser with visit, back, and forward navigation. Uses two stacks (or a doubly-ended array with pointer) for O(1) visit and O(steps) navigation.
Design a leaderboard that tracks player scores, supports score additions, top K sum queries, and player resets using a hashmap with sorted aggregation.
Simulate a snake game on a grid where the snake eats food to grow and dies if it hits walls or itself. Uses a deque for O(1) head/tail and a set for O(1) body collision checks.
Implement a skip list from scratch supporting search, add, and erase in O(log n) expected time. Uses layered linked lists with probabilistic level assignment.
Design a log storage system that retrieves log IDs within a timestamp range at a specified granularity (Year, Month, Day, Hour, Minute, Second).
Design a phone directory managing available and allocated numbers with O(1) get, check, and release using a queue of free numbers and a boolean availability array.
Design an in-memory key-value database that supports set, get, delete, and rank operations. Demonstrates combining hashmaps with sorted structures for efficient multi-key queries.
Implement a stack that supports push, pop, top, and retrieving the minimum element in O(1) time using an auxiliary min-stack that tracks minimums at each level.
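The auxiliary min-stack idea in Python: `mins[i]` always holds the minimum of the first i+1 stack elements, so every operation, including `get_min`, stays O(1):

```python
class MinStack:
    def __init__(self):
        self.stack = []
        self.mins = []   # mins[i] == minimum of stack[:i+1]

    def push(self, x):
        self.stack.append(x)
        self.mins.append(min(x, self.mins[-1]) if self.mins else x)

    def pop(self):
        self.mins.pop()          # the two stacks shrink in lockstep
        return self.stack.pop()

    def top(self):
        return self.stack[-1]

    def get_min(self):
        return self.mins[-1]     # current minimum in O(1)
```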
Maintain a running median from a data stream using two heaps: a max-heap for the lower half and a min-heap for the upper half, rebalancing after each insertion.
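The two-heap scheme in Python; `heapq` only provides min-heaps, so the lower half is stored negated to act as a max-heap, and a rebalance after each insert keeps the size difference at most one:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.lo = []  # max-heap of the lower half (values stored negated)
        self.hi = []  # min-heap of the upper half

    def add(self, num):
        heapq.heappush(self.lo, -num)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))  # keep lo's max <= hi's min
        if len(self.hi) > len(self.lo):                   # lo may hold one extra element
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return float(-self.lo[0])                     # odd count: top of lower half
        return (-self.lo[0] + self.hi[0]) / 2             # even count: average of tops
```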
Implement a circular queue (ring buffer) with fixed capacity supporting enQueue, deQueue, Front, Rear, isEmpty, and isFull in O(1) time using head and tail pointers.
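A ring-buffer sketch tracking a head index and a size counter, so the tail position is derived and every operation is O(1):

```python
class CircularQueue:
    def __init__(self, k):
        self.buf = [None] * k
        self.head = 0      # index of the front element
        self.size = 0
        self.cap = k

    def enqueue(self, value):
        if self.size == self.cap:
            return False
        self.buf[(self.head + self.size) % self.cap] = value  # next free tail slot
        self.size += 1
        return True

    def dequeue(self):
        if self.size == 0:
            return False
        self.head = (self.head + 1) % self.cap  # front moves forward, wrapping around
        self.size -= 1
        return True

    def front(self):
        return -1 if self.size == 0 else self.buf[self.head]

    def rear(self):
        return -1 if self.size == 0 else self.buf[(self.head + self.size - 1) % self.cap]

    def is_empty(self):
        return self.size == 0

    def is_full(self):
        return self.size == self.cap
```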
Design a URL shortener like TinyURL that encodes long URLs to short codes and decodes them back. Uses base-62 encoding with a counter or random string generation.
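The counter-based variant hinges on base-62 conversion: a monotonically increasing row id becomes a short alphanumeric code, and decoding reverses it without any lookup table (the digits-then-lowercase-then-uppercase alphabet order is one common convention):

```python
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def encode(n):
    """Base-62 encode a non-negative counter into a short code."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])       # least significant digit first
    return "".join(reversed(out))

def decode(code):
    """Invert encode(): turn a short code back into the counter."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Six base-62 characters cover 62^6 (about 56 billion) URLs, which is why short codes stay short even at large scale.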
Design a parking system with big, medium, and small spaces. O(1) addCar checks if space is available and decrements the counter, returning whether the car was parked.
Complete recap of all 20 system design DSA problems: pattern classification, time complexities, key data structures, and decision framework for choosing the right design approach.