
AWS has tied its open-source Strands framework to a new AgentCore harness that cuts agent deployment to three API calls, targeting enterprise friction in scaling agentic AI.
Amazon Web Services has expanded its open-source agent strategy with a managed AgentCore harness built on Strands Agents, allowing developers to deploy autonomous agents through configuration rather than orchestration code and bring them online in three API calls.
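AWS has not published the exact call sequence in this announcement, but the configuration-first flow it describes can be sketched in miniature. Everything below is a hypothetical, in-memory stand-in: the class and method names are illustrative, not the real AgentCore API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Declarative agent definition: configuration, not orchestration code."""
    name: str
    model_id: str
    tools: list = field(default_factory=list)

class FakeAgentCoreClient:
    """Toy stand-in for a managed agent-runtime control plane."""
    def __init__(self):
        self._agents = {}

    def create_agent(self, config: AgentConfig) -> str:
        # Call 1: register the agent from its configuration.
        agent_id = f"agent-{len(self._agents) + 1}"
        self._agents[agent_id] = {"config": config, "status": "CREATING"}
        return agent_id

    def deploy(self, agent_id: str) -> str:
        # Call 2: provision the isolated runtime (AWS describes a
        # dedicated microVM per session) and mark the agent ready.
        self._agents[agent_id]["status"] = "READY"
        return self._agents[agent_id]["status"]

    def invoke(self, agent_id: str, prompt: str) -> str:
        # Call 3: send a request. In the real harness this step would
        # cover reasoning, tool selection, action execution and
        # response streaming; here we simply echo.
        record = self._agents[agent_id]
        assert record["status"] == "READY", "agent not deployed"
        return f"[{record['config'].model_id}] {prompt}"

client = FakeAgentCoreClient()
agent_id = client.create_agent(AgentConfig(name="support-bot",
                                           model_id="example-model"))
client.deploy(agent_id)
reply = client.invoke(agent_id, "hello")
```

The point of the sketch is the shape of the flow, not the names: one declarative definition, one provisioning call, one invocation, with all orchestration hidden behind the managed runtime.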
The move places open source at the centre of AWS’s push to reduce enterprise deployment friction, with the harness handling reasoning, tool selection, action execution and response streaming inside a dedicated microVM per session. AWS is positioning infrastructure abstraction, rather than model quality, as the next battleground for agentic AI adoption.
The model-agnostic architecture supports Amazon Bedrock models alongside OpenAI and Google Gemini, while preserving framework neutrality through support for Strands, LangGraph, LlamaIndex and CrewAI. Teams can also export agents into Strands code as workloads grow more complex.
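Model-agnostic routing of this kind is typically built on a common provider interface, so the agent logic is written once and the model binding becomes a configuration choice. The provider classes below are toy stand-ins, not the real Bedrock, OpenAI or Gemini SDK clients; the sketch only shows the pattern.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Common interface every model backend implements."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class BedrockProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"bedrock:{prompt}"      # real client call would go here

class OpenAIProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"openai:{prompt}"

class GeminiProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"gemini:{prompt}"

PROVIDERS: dict[str, ModelProvider] = {
    "bedrock": BedrockProvider(),
    "openai": OpenAIProvider(),
    "gemini": GeminiProvider(),
}

def run_agent(provider_name: str, prompt: str) -> str:
    # The agent logic is identical for every backend; only the
    # provider binding, chosen by name, changes.
    return PROVIDERS[provider_name].complete(prompt)
```

Framework neutrality works the same way one level up: as long as Strands, LangGraph, LlamaIndex and CrewAI agents speak a common runtime contract, the harness can host any of them without the agent code knowing which runtime it is on.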
AWS paired the preview with an AgentCore CLI for infrastructure-as-code workflows, a persistent filesystem for long-running agents, and coding agent skills for tools including Kiro, with Claude Code, Codex and Cursor support rolling out. AWS said there is no separate charge for the harness, CLI or skills.
Available in four regions in preview, the release strengthens AWS’s position against rival agent platforms from Google, Microsoft and OpenAI, while underscoring a bet that managed agent runtimes with open-source interoperability could become a foundational enterprise AI layer.
Limits remain, including regional restrictions, a complexity ceiling for configuration-led agents and unanswered questions around benchmarks and portability.