AXME Cloud · 3 min read

Your AutoGen Agents Can't Talk Across Machines. Here's the Missing Piece.

AutoGen handles multi-agent conversations beautifully - inside one process. Put agents on different machines and you're back to building message brokers from scratch.

AutoGen is great at multi-agent conversations. AssistantAgent talks to UserProxyAgent. They go back and forth. LLM reasons, tools execute, results flow.

All inside one Python process.

Put the analyzer agent on Machine A and the processor agent on Machine B - because they need different GPUs, different permissions, different scaling profiles - and suddenly you’re not writing agent logic anymore. You’re writing distributed infrastructure.

What Cross-Machine AutoGen Actually Requires

# Machine A: analyzer sends result to Machine B
import json

import celery
from autogen import AssistantAgent, UserProxyAgent

app = celery.Celery("agents", broker="redis://redis:6379/0")

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def send_to_processor(self, analysis_result):
    try:
        # Serialize AutoGen conversation state
        serialized = json.dumps(analysis_result)
        # Push to the queue consumed by Machine B
        result = processor_task.apply_async(
            args=[serialized],
            queue="processor_queue",
            expires=3600,
        )
        return result.id
    except Exception as exc:
        raise self.retry(exc=exc)

# Machine B: processor receives and runs
@app.task(bind=True)
def processor_task(self, serialized_result):
    result = json.loads(serialized_result)
    # Recreate AutoGen agents from scratch -- config and exec_config
    # must be redefined on this machine; agent state does not travel
    processor = AssistantAgent("processor", llm_config=config)
    proxy = UserProxyAgent("proxy", code_execution_config=exec_config)
    # Run verification conversation
    proxy.initiate_chat(processor, message=f"Verify: {result}")
    return proxy.last_message()

Plus: Redis deployment. Celery worker config. Queue monitoring. Result backend. Serialization/deserialization of AutoGen state. Error handling when Machine B is down. Dead letter queue for failed deliveries. Health checks.

You wanted two agents to talk. You got a distributed message broker.

What If the Infrastructure Was Already There?

# Machine A: analyzer sends result via AXME
import os

from axme import AxmeClient, AxmeClientConfig

client = AxmeClient(AxmeClientConfig(api_key=os.environ["AXME_API_KEY"]))

intent_id = client.send_intent({
    "intent_type": "intent.analysis.process.v1",
    "to_agent": "agent://myorg/production/processor",
    "payload": {
        "analysis_id": "data-q1-2026",
        "findings": analysis_result,
        "confidence": 0.94,
    },
})
result = client.wait_for(intent_id)

# Machine B: processor listens
for intent in client.listen("agent://myorg/production/processor"):
    # Run AutoGen verification
    proxy.initiate_chat(processor, message=f"Verify: {intent.payload}")
    client.resume_intent(intent.id, {"verified": True, "output": proxy.last_message()})

No Redis. No Celery. No queue config. No serialization code. No dead letter queue.

Machine A sends. AXME delivers to Machine B over SSE. Machine B processes and responds. If Machine B is down, AXME retries. If Machine B crashes mid-processing, AXME redelivers.
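The retry-and-redeliver behavior described above is the standard at-least-once delivery pattern: an intent stays pending until the consumer explicitly acknowledges it, so a crash mid-processing means the next delivery attempt sees it again. A minimal sketch of that pattern in plain Python (class and method names are illustrative, not AXME internals):

```python
# Illustrative at-least-once delivery: nothing is dropped until acked.
# All names here are hypothetical -- this is the pattern, not AXME's code.

class IntentQueue:
    def __init__(self):
        self.pending = {}   # intent_id -> payload
        self.acked = set()

    def send(self, intent_id, payload):
        self.pending[intent_id] = payload

    def deliver(self):
        # Redeliver everything not yet acknowledged
        return [(i, p) for i, p in self.pending.items() if i not in self.acked]

    def ack(self, intent_id):
        self.acked.add(intent_id)
        self.pending.pop(intent_id, None)

queue = IntentQueue()
queue.send("intent-1", {"findings": "..."})

# First delivery attempt: consumer crashes before acking
for intent_id, payload in queue.deliver():
    pass  # simulated crash: no ack recorded

# Second attempt sees the same intent again and completes it
for intent_id, payload in queue.deliver():
    queue.ack(intent_id)

print(len(queue.deliver()))  # → 0 once acknowledged
```

The DIY Celery version has to reimplement exactly this ack/redeliver loop, plus the dead letter queue for intents that never succeed.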

What Changes in Your AutoGen Code

Almost nothing. Your agent conversations stay the same. The framework handles the LLM reasoning and tool execution. AXME handles the part between machines.

|                        | AutoGen only                      | AutoGen + AXME         |
|------------------------|-----------------------------------|------------------------|
| Agent conversations    | AssistantAgent + UserProxyAgent   | Same                   |
| LLM calls              | OpenAI/Anthropic/local            | Same                   |
| Tool execution         | code_execution_config             | Same                   |
| Cross-machine delivery | You build (Redis/Celery/RabbitMQ) | Platform handles       |
| Retry on failure       | You build                         | Automatic (3 attempts) |
| Timeout                | You build                         | Configurable deadline  |
| Human approval gate    | You build                         | Add to scenario        |
| Crash recovery         | Task lost                         | Automatic redelivery   |

Adding a Human Gate

This is where it gets interesting. Say the processor agent finds something suspicious in the analysis. Before finalizing, a human needs to verify.

With raw infrastructure, that’s another 200 lines - notification service, callback endpoint, polling loop, timeout handler.

With AXME, it’s a gate in the scenario definition. The intent pauses, the human gets notified, and the agent resumes when they approve. Same pattern as any other AXME human task.
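The lifecycle behind a gate is pause, notify, resume. A tiny state-machine sketch of that flow in plain Python (every name below is hypothetical; the actual AXME scenario syntax is not shown in this post):

```python
# Sketch of the pause/notify/resume lifecycle behind a human gate.
# Class and function names are illustrative, not the AXME API.

class Intent:
    def __init__(self, intent_id, payload):
        self.id = intent_id
        self.payload = payload
        self.state = "running"

def hit_gate(intent, notify):
    # The intent pauses and the human is notified
    intent.state = "waiting_approval"
    notify(f"Approval needed for {intent.id}: {intent.payload}")

def approve(intent):
    # The human approves; the agent resumes where it paused
    if intent.state == "waiting_approval":
        intent.state = "running"

notifications = []
intent = Intent("intent-42", {"suspicious": True})
hit_gate(intent, notifications.append)
approve(intent)
print(intent.state)  # → running
```

The point of the platform gate is that this state machine, the notification delivery, and the timeout handling all live outside your agent code.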

Not Just AutoGen

The same pattern works for any framework combination:

  • LangGraph agent on Machine A -> CrewAI agent on Machine B
  • OpenAI Agents SDK on Machine A -> Pydantic AI on Machine B
  • Raw Python on Machine A -> Google ADK on Machine B

The frameworks don’t need to know about each other. AXME is the coordination layer between them.
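Because the coordination layer only carries JSON payloads, each machine needs just a thin adapter from the payload to its own framework's input. A sketch of that idea (handler names and addresses are made up for illustration):

```python
# Each side registers a handler keyed by its agent address; neither
# framework knows the other exists -- both see plain dicts.
# All names here are hypothetical examples, not AXME conventions.

def autogen_handler(payload):
    # On this machine, hand the payload to an AutoGen conversation
    return f"AutoGen verified: {payload['findings']}"

def crewai_handler(payload):
    # On another machine, the same payload could feed a CrewAI crew
    return f"CrewAI processed: {payload['findings']}"

HANDLERS = {
    "agent://myorg/production/processor": autogen_handler,
    "agent://myorg/production/reviewer": crewai_handler,
}

def dispatch(to_agent, payload):
    return HANDLERS[to_agent](payload)

print(dispatch("agent://myorg/production/processor", {"findings": "ok"}))
# → AutoGen verified: ok
```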

Try It

Working example - AutoGen analyzer on Machine A sends findings to AutoGen processor on Machine B, with human approval gate:

github.com/AxmeAI/autogen-cross-machine-handoff

Built with AXME - cross-machine agent coordination with durable lifecycle. Alpha - feedback welcome.
