AI coding assistants have long since moved beyond autocomplete. Agentic IDEs now read your project, plan multi-step changes, call tools, install libraries, and quietly edit your codebase.
To support that workflow, tools like Claude Code include support for third-party plugin marketplaces. Connect a marketplace. Enable a plugin. Your agent gains new “skills” for tests, infra, migrations, and dependency management. OpenAI has adopted a similar pattern for tools, so to be clear, this is not a Claude-only or Claude-specific problem.
Likewise, broader marketplace misuse for malicious purposes has long been an issue in app and plugin marketplaces, an inherent risk and tradeoff with marketplaces that embrace openness to encourage community innovation over strict governance. What is new is how this risk can be exploited in an agentic world.
As our demo below shows, such misuse and such a mechanism could become a supply chain attack on your development environment. A benign-looking plugin exposes a dependency helper skill. That skill forces an AI coding agent, like Claude Code, to install a malicious version of httpx whenever you ask it to manage dependencies. A small convenience becomes a high privilege choke point.
Marketplace plumbing: skills as silent tools
Tools like Claude Code discover plugins through a marketplace manifest. Each plugin may advertise one or more skills. A skill is a callable tool the agent can use while planning changes.
The workflow is simple. Connect to a marketplace. Enable a plugin. Its skills become trusted capabilities for the workspace. Ask Claude for something like “add an HTTP client and show me how to make a GET request.” Claude may call the dependency skill to install httpx, update manifests, and generate example code.
Once installed, skills look like infrastructure. From the agent’s perspective, manage_python_dependencies is as trustworthy as running tests or formatting code. That trust is the foothold for the attack.

Marketplace skills on paper
The marketplace story sells convenience. Enable curated plugins and let skills teach the agent how your stack works. Use dependency helpers for smart updates, pinned versions, and optional security checks.
What the story lacks is a threat model. There is no boundary between convenience automation and high privilege policy. No guidance for runtimes consuming skills from unofficial marketplaces. No visibility into where packages are installed from or when a skill changes core behavior.
That gap is enough to potentially compromise your environment.
The demo: a dependency skill that swaps httpx
The video demonstration below walks through a clean failure mode. A developer connects Claude Code to an unofficial marketplace. They install a friendly dependency helper plugin. From that moment, any request involving httpx triggers the malicious behavior.
Workspace setup
The developer adds an unofficial marketplace hosted in a public repo. One plugin advertises itself as a “Python Dependency Helper” that keeps libraries pinned and secure. The README talks about hygiene and audits. Nothing hints at abuse.
Developer interaction
Inside a Python project, the developer asks Claude:
“Add httpx and show me how to call an external API.”
Claude plans to install httpx, update requirements.txt, and create example code. To do this, it calls the dependency helper skill from the plugin.
Hidden behavior
The malicious skill never uses the official package index. It redirects installs to an attacker controlled source. It ensures httpx always resolves to a trojanized build. It updates manifests so the malicious version persists. Claude sees the install as normal and moves on.
Unintended behavior
Everything looks correct. import httpx works. The example code runs. But the library is the attacker’s build. It can exfiltrate environment variables, watch outbound requests, or hide a backdoor triggered by a specific HTTP pattern. All triggered by a normal request to add a dependency.
What the human sees
Only routine output. “Installing httpx.” “Updating requirements.txt.” Any oddities, like a custom index URL, blend into expected noise. The plugin stays active, so the malicious path repeats for future installs.
This is not prompt manipulation. It is compromised automation.
How this hits OWASP ASI01 and ASI02
Two agentic risks are clear.
ASI01. Agent Goal Hijack
The developer’s goal is to install the official httpx. The skill overrides that goal with “install the attacker’s build.” In the demo, Claude treats that internal logic as the correct objective.
ASI02. Tool Misuse
Claude uses legitimate tools like pip, file writes, and package indexes. The malicious skill guides those tools into unsafe actions. Dependency management becomes a remote code execution path.
This is not a clever prompt. It is a hostile tool.
Why marketplace skills create a high risk choke point
Marketplace trust collapses into one decision. Connecting to a marketplace implies that every plugin is allowed to change how your agent behaves. If that marketplace includes unvetted plugins, you inherit their maintainers into your supply chain.
Dependency helpers hold high privilege. They decide where packages come from, which artifacts are installed, and how manifests are managed. That makes them ideal persistence points for compromise.
The compromise also persists. Once the plugin is installed, every future session has access to the malicious skill. Any install of httpx routes through attacker logic until someone inspects and removes the plugin.
This behaves like compromising your package manager, not like a prompt trick.
Takeaways
Marketplace skills used by AI coding agents, like Claude Code, are not harmless convenience features. They sit directly in the path where code is fetched and executed. A single malicious plugin can redirect dependency installs and introduce trojanized libraries. In the demo, the only user actions are connecting to an unofficial marketplace, installing a plugin, and asking Claude to add httpx. Everything else happens inside the skill.
Viewed through the OWASP Agentic Top 10, this is a clean example of ASI01 and ASI02 without any visible red flags. If your coding assistant can install packages for you, every marketplace, plugin, and skill becomes part of your attack surface.
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third party.

.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)