OpenClaw and Hermes: two open-source AI agents built in Singapore showing up in search trends

i pulled the singapore google trends data on "ai agent" yesterday for unrelated reasons and was surprised to see two of the top rising related queries in the breakout column: "openclaw ai agent" (706,000% growth) and "hermes ai agent" (157,000%). a third related query, "agent skills" (239,000%), is the broader concept search that both of those projects relate to. those numbers caught my eye because both openclaw and hermes are projects i set up and run from my home office in singapore — not vc-funded products, no marketing budget, no dedicated growth team. just two repos i open-sourced and a few hundred hours of work each.

i should be upfront about the conflict of interest before i go further. i set up and run both of these agents. this is not an objective product review; it is a practitioner case study, by the practitioner who shipped them. i am writing it because the search-trend wave is real and i think the more useful response is to be honest about what these projects are, what they are not, and what the broader signal of "agent infrastructure built locally in sg is appearing in search trends" actually means for anyone watching the space.

what openclaw is

openclaw is an autonomous personal ai operator. i run it on a small fedora box in my home office, twenty-four hours a day. it has its own configuration, its own cron schedule, a knowledge base of context about me and my goals, a roster of skills it knows how to perform, and the ability to call any of a dozen llm backends — local models running on a strix halo box, hosted apis, free-tier models on nvidia nim, a few others. it is not a chat interface. you do not open openclaw and ask it a question. it runs on its own, on schedule, doing the things i told it to do, and it tells me what it found via telegram messages on a few channels.

concretely, the things openclaw does for me right now include: scanning a list of news sources every morning and summarising what is new in singapore tech and ai policy; running cost-of-living and mortgage-watch arithmetic on a weekly cadence; checking my crypto positions for risk thresholds and notifying me only when something needs attention; processing emails through a triage skill that surfaces ones likely to need a reply; running a weekly review pass on my todo lists and project files. it has forty-seven skills loaded as of today. some are full multi-step automations; some are single shell scripts. they all live in markdown files that openclaw reads and decides which to fire when.
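
to make the skills-as-markdown idea concrete, here is a minimal sketch of what a loader for that pattern could look like. the frontmatter format, field names, and `parse_skill`/`due_skills` functions are all hypothetical illustrations — not openclaw's actual schema — but the shape (markdown file in, dispatch decision out) is the one described above.

```python
# hypothetical sketch: a skill is a markdown file with a small
# frontmatter header; the orchestrator parses it and decides
# which skills are due on this tick of the schedule.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    schedule: str   # e.g. "daily", "weekly"
    prompt: str     # the markdown body, sent to whichever llm backend

def parse_skill(text: str) -> Skill:
    """parse a markdown skill file with a '---' frontmatter block."""
    _, header, body = text.split("---", 2)
    meta = dict(line.split(":", 1) for line in header.strip().splitlines())
    return Skill(
        name=meta["name"].strip(),
        schedule=meta["schedule"].strip(),
        prompt=body.strip(),
    )

def due_skills(skills, cadence):
    """return the skills whose schedule matches this tick's cadence."""
    return [s for s in skills if s.schedule == cadence]

example = """---
name: news-scan
schedule: daily
---
scan the news sources listed below and summarise what is new
in singapore tech and ai policy. output: a short telegram digest.
"""

skills = [parse_skill(example)]
print([s.name for s in due_skills(skills, "daily")])  # ['news-scan']
```

the point of the sketch is the division of labour: the orchestration code stays tiny and dumb, and everything that makes a skill useful lives in the markdown body that gets handed to a model.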

the architecture, kept honest: openclaw is a thin orchestration layer over the model context protocol. the actual agent intelligence lives in whatever llm i route the work to. the value openclaw adds is not novel reasoning ability — it is the persistence (it runs continuously rather than session-by-session), the skills system (durable, version-controlled prompts that capture how to do specific tasks well), the memory system (a compounding knowledge base about me and my goals that gets passed into every relevant call), and the telegram/whatsapp glue that turns it into a thing i actually interact with from my phone.

it is open source. it is on github. it is mit-licensed. anyone can run their own copy. very few people actually do, because configuring an autonomous agent for someone else's life is not a set-and-forget install — it is several days of writing skills that match how that person actually wants their ai operator to behave. the rising-search-query interest is mostly people who heard the name in a forum post and wanted to know what it is, plus a smaller cohort of people who are seriously evaluating whether to fork it.

what hermes is

hermes is the second of the two. it is a research and sales agent — narrower scope than openclaw, narrower model range, narrower interface. its job is to be the thing that does long-running research tasks, drafts outbound communication (sales emails, linkedin messages, content for review), and handles the parts of my consulting practice that look like "go and find out about this company / industry / opportunity, write me a brief, do not bother me until you have something worth showing".

where openclaw is the orchestrator that runs across many domains, hermes is a specialist that goes deep on a narrower set. they hand work to each other — openclaw might decide that a piece of incoming context (a new email from a prospect, say) needs a research pass, and it will hand the task to hermes; hermes will go and do thirty minutes of company research and come back with a brief that openclaw then queues into my morning telegram digest.

hermes runs primarily on kimi-k2.5 via nvidia's nim platform, with fallbacks to a chain of free-tier models. it has its own configuration, its own playbooks (b2b sales, content marketing, research synthesis), and its own evaluation loop where it checks its own output against criteria before shipping it to me. it is also open-source.
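
the "evaluation loop" mentioned above can be sketched as a draft-then-check cycle. the function names and criteria here are hypothetical stand-ins, not hermes' actual code — the idea is just that output which fails its own checks never reaches the human.

```python
# hypothetical sketch of a self-evaluation loop: generate a draft,
# score it against named criteria, retry a bounded number of times,
# and give up (rather than ship bad output) if it never passes.
def evaluate(draft: str, criteria) -> list[str]:
    """return the names of the criteria the draft fails."""
    return [name for name, check in criteria if not check(draft)]

def run_with_checks(generate, criteria, max_attempts=3):
    for _ in range(max_attempts):
        draft = generate()
        if not evaluate(draft, criteria):
            return draft
    return None  # surface to the human instead of shipping bad output
```

the bounded retry matters: an unattended agent with an unbounded retry loop is an unbounded model bill.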

the two agents share a small task queue at ~/.agent-mesh/ on my machine. either can post a task ("research this prospect", "summarise this paper", "draft outreach for this lead"); either can pick up tasks from the backlog. that mesh is a third small project — not a product, just the wiring between the two. the rising-related-query "agent skills" sits roughly in this neighborhood: people are searching for how the skills-as-prompts-as-behaviours pattern works, and openclaw's skills directory is one of the more concrete public examples on github.
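
the task-mesh wiring is simple enough to sketch. the `~/.agent-mesh/` path is from the setup above, but the json layout and the claim-by-rename scheme below are hypothetical illustrations of how a file-based queue between two agents can work, not the actual implementation.

```python
# hypothetical sketch of a file-based task mesh: either agent posts
# a task as a json file; either agent claims the oldest one by
# renaming it, so a task is never picked up twice.
import json
import time
import uuid
from pathlib import Path

def post_task(mesh: Path, kind: str, payload: dict) -> Path:
    """drop a task into the shared backlog as a json file."""
    mesh.mkdir(parents=True, exist_ok=True)
    path = mesh / f"{int(time.time())}-{uuid.uuid4().hex[:8]}.task.json"
    path.write_text(json.dumps({"kind": kind, "payload": payload}))
    return path

def claim_task(mesh: Path, agent: str):
    """claim the oldest open task by renaming it to this agent."""
    for path in sorted(mesh.glob("*.task.json")):
        claimed = path.with_name(path.name + "." + agent)
        try:
            path.rename(claimed)  # on posix, rename either wins or raises
        except FileNotFoundError:
            continue  # the other agent got there first
        return json.loads(claimed.read_text())
    return None
```

in practice the mesh directory would be `Path.home() / ".agent-mesh"`; the rename-to-claim trick is the standard cheap way to get mutual exclusion out of a filesystem without a real queue.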

how they differ from claude code or codex

this is the comparison most people in the rising-query data are implicitly trying to make. claude code (anthropic's terminal-based coding agent) and codex (openai's coding agent) are excellent at what they do. i use claude code daily. these projects are not trying to replace either; they sit in different parts of the agent landscape.

claude code is a session-shape agent. you open a terminal, you give it a task, you supervise the work, you close the terminal when done. it is engineered for engineering work and the agent loop runs while you are watching it. that shape is the right shape for software development tasks.

openclaw is a persistent-shape agent. it runs continuously, no human in the session, fires on cron, surfaces output through messaging. that shape is the right shape for "i want a piece of software watching my world and acting on my behalf without me being there". the trade-offs are explicit: weaker reasoning per task because the agent loop has to fit into a budget i pre-set, much higher coverage because it is always running, and an entirely different supervision model where i review what it did after the fact rather than during.

hermes sits between the two. it is session-shape but the session is initiated by openclaw or by a cron job, not by me at a terminal. when it runs, it is doing focused work on a defined task. when it is done, it surfaces the result and goes idle.

the right framing for someone looking at these projects in the context of claude code or codex is: claude code and codex are tools for doing engineering work in a session you control. openclaw is infrastructure for delegating recurring non-engineering work to an agent that runs unsupervised. you can use both. i use both. they are complementary, not competing.

why agent infra built in singapore is showing up in search trends

this is the part i think is more interesting than my own projects.

the broader signal in the rising-query data is that "ai agent", "agent skills", and adjacent searches are being typed by people in singapore in volumes that did not exist twelve months ago. that is the underlying wave. the fact that two of the breakout-tier related queries happen to be sg-built projects is partly noise (i am one data point) and partly signal (singapore-built agent infra is concrete enough and searchable enough to surface in the trends data at all, which suggests the local builder community is producing real artifacts, not just talking).

three reasons i think the local-build wave is real, beyond my own projects.

one. the cost-of-build is low. a single engineer with a strix halo or comparable local-ai box, a few llm subscriptions, and a github account can ship an agent project in a few weekends. the infrastructure cost is roughly the cost of the workstation hardware plus electricity, both of which are reasonable in singapore. you do not need vc money to build a respectable agent.

two. the local context creates real differentiation. sg-specific agent work — agents that know about singapore tax filing, sg companies act compliance, sg payment rails (paynow, fast), sg government data sources, sg labour-market patterns — is poorly served by anyone building generically in san francisco. the structural opportunity for local builders is not to compete on the general agent loop (which the major labs will own) but to ship the thin layer of local context that turns a generic agent into an sg-specialised one.

three. open source is the right shape for solo or small-team agent work. agents are configuration-heavy. they need to be inspectable, modifiable, hostable on infrastructure the user controls. that is the shape open-source software is good at and closed saas products are bad at. several of the most-watched sg-built agent repos in the last year (mine and several others, including peers building in adjacent niches) have been mit or apache licensed. the search-trend wave on "openclaw ai agent" and "hermes ai agent" is a small piece of evidence that this open-source-first model is finding an audience.

the honest case study — what worked and what did not, building these solo

i have shipped roughly eleven months of work into these two projects on top of consulting work, family, and other commitments. what i would tell another sg solo builder considering similar work, ordered roughly by importance.

one. start with skills, not a framework. my first version of openclaw was a heavy framework with abstract base classes and elaborate plugin systems. it did not do useful work. the version that does useful work is two hundred lines of orchestration plus forty-seven skills, each of which is a markdown file and a small shell script. the value is in the skills. if you find yourself spending more than 20% of your build time on the framework, you are off-track.

two. treat the agent's prompts as production code. they are. they need version control, change-review, and tests (yes, tests for prompts; you can write small suites that check output shape and forbidden-content rules). most agent projects fail because the prompts drift, and the drift is not visible because the output looks plausible. i write a "prompt regression suite" for the highest-value skills now and it has caught real failures.
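
what a "test for a prompt" can look like in practice: below is a hypothetical sketch of a regression check for a single skill's output, covering the two kinds of rules mentioned above — output shape and forbidden content. the check names and patterns are illustrations, not my actual suite.

```python
# hypothetical sketch of a prompt regression check: given a skill's
# output, return a list of rule violations. run it against saved
# outputs whenever a prompt changes, and fail the change on any hit.
import re

FORBIDDEN = [
    r"as an ai language model",   # boilerplate leakage
    r"\bi cannot\b",              # refusal where none is expected
]

def check_digest(output: str) -> list[str]:
    """return the failures for a morning-digest skill's output."""
    failures = []
    if not output.strip():
        failures.append("empty output")
    if len(output) > 2000:
        failures.append("digest too long for a telegram message")
    if not any(line.startswith("- ") for line in output.splitlines()):
        failures.append("expected at least one bullet point")
    for pattern in FORBIDDEN:
        if re.search(pattern, output, re.IGNORECASE):
            failures.append(f"forbidden content matched: {pattern}")
    return failures
```

the checks are deliberately crude. the goal is not to grade quality — a model can do that — it is to catch the silent drift where output still looks plausible but has stopped following the rules.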

three. do not build the chat interface first. half the time-sinks i hit early came from building chat ui that i later deprecated. an agent that runs on cron and writes to a telegram channel is enough for ninety percent of personal-use cases, and the engineering cost is one tenth of building a real chat interface. only build the interface when you know exactly what conversations the agent needs to have.
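
the cron-plus-telegram shape is small enough to show whole. this is a hedged sketch: the telegram bot api's `sendMessage` endpoint is real, but `build_digest` is a hypothetical stand-in for whatever skill produced today's output, and the env var names are my own convention.

```python
# hypothetical sketch: the entire "interface" of a cron-driven agent.
# a crontab entry runs this script; it posts one message to a channel
# via the telegram bot api and exits.
import os
import urllib.parse
import urllib.request

def send_telegram(text: str) -> None:
    """post a message via the bot api's sendMessage endpoint."""
    token = os.environ["TELEGRAM_BOT_TOKEN"]
    chat_id = os.environ["TELEGRAM_CHAT_ID"]
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    urllib.request.urlopen(url, data=data)  # POST

def build_digest() -> str:
    # stand-in for the real skill output
    return "- nothing urgent today"
```

wired up with a crontab line like `0 7 * * * python3 ~/agent/digest.py` calling `send_telegram(build_digest())`, that is the whole interface — no websockets, no session state, no ui to maintain.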

four. budget for the model bills. running an autonomous agent twenty-four hours a day is not free, even with aggressive use of free-tier models. my own monthly spend across nvidia nim, anthropic, openai, and a few others sits around sgd 300-500 in months when i am running heavy automation. that is not bad, but it is real. plan for it.

five. maintain a hard line on what the agent will not do. mine has explicit "no autonomous outbound to real customers", "no irreversible financial transactions", "no posting to public channels without my approval" rules. the temptation is to relax those rules as the agent gets more reliable. resist it. the day you relax the rule is the day the agent makes the kind of mistake you can never undo.
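
one way to make those rules hard rather than aspirational is to encode them as a deny list the orchestrator consults before executing any action, instead of relying on the prompt to behave. the action names below are hypothetical illustrations of the three rules above.

```python
# hypothetical sketch: hard guardrails as code, not prompts. blocked
# actions fail closed unless a human has explicitly approved this call.
BLOCKED_ACTIONS = {
    "send_external_email",   # no autonomous outbound to real customers
    "execute_trade",         # no irreversible financial transactions
    "post_public",           # no posting to public channels unapproved
}

def is_allowed(action: str, approved_by_human: bool = False) -> bool:
    """deny blocked actions outright unless a human approved this one."""
    if action in BLOCKED_ACTIONS and not approved_by_human:
        return False
    return True
```

the design point is that the check lives outside the model: a prompt can be argued out of a rule, a set membership test cannot.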

those five do not capture all the work, but they do capture the lessons most often missing from the agent-build content i see online.

where these projects go next

both projects are personal. neither is being commercialised, and that is a deliberate choice, not a hedge — there are no vc conversations, no fundraising, no pivot to saas in the works. the value of running your own agent infrastructure is sufficient for me; turning that into a product introduces a different set of constraints (multi-tenant security, customer support, sla obligations) that i do not want to take on right now.

i will keep shipping skills and improving the agent loops. i will probably write more about what the skills look like when they work well and when they fail. the github repos are the source of truth — both projects are reachable from my profile linked at altronis.sg/zach.

for anyone in singapore reading the search-trend data and wondering whether to build their own agent: yes, it is a good time to build, the cost is low, and the audience is searching. just be honest about whether you are building for yourself or for a market, because the two require different shapes, and confusing them is the most common way solo agent projects fail.