# How do I publish an agent-card.json or llms.txt for my brand? (2026)

### TL;DR
* **Standardized discovery files.** Machine-readable manifests such as `llms.txt` and `agent-card.json` give Large Language Models (LLMs) and autonomous agents a direct entry point to a brand's identity, product catalog, and API capabilities.
* **Root-level directory placement.** Implementation requires hosting these files at the domain root or in its `/.well-known/` directory so that automated crawlers can verify domain ownership and ingest structured data without manual intervention.
* **Agentic ecosystem interoperability.** Adopting these formats facilitates seamless integration with the Model Context Protocol (MCP) and Agent Commerce Protocol (ACP), enabling AI assistants to perform complex transactions and real-time inventory lookups.

Agentic commerce represents a fundamental shift in digital discovery where autonomous software entities, rather than human users, navigate the web to fulfill specific intents. This transition is driven by the rapid proliferation of AI assistants that require high-density, low-latency access to brand data. Traditional HTML-based websites, designed for human visual consumption, often present significant "noise" for LLMs, leading to hallucinations or incomplete data retrieval. Consequently, the industry is moving toward a "headless brand" model where structured metadata files act as the definitive source of truth for AI agents.

Industry adoption of these standards is accelerating as the volume of agent-to-agent (A2A) traffic increases. Research from major AI labs suggests that structured data formats can meaningfully improve the accuracy of LLM responses compared to standard web scraping. Furthermore, the [IETF (Internet Engineering Task Force)](https://www.ietf.org/) and [Schema.org](https://schema.org/) continue to refine the specifications for how commercial entities should represent themselves to non-human actors. This evolution is no longer optional for brands that wish to remain visible in an ecosystem where AI assistants mediate a growing share of consumer choices.

The technical foundation of this shift rests on two primary files: `llms.txt` and `agent-card.json`. The `llms.txt` file is a Markdown-based proposal designed to provide a concise summary of a website's content for LLMs, while `agent-card.json` (often associated with the Agent Commerce Protocol) provides a more rigorous, schema-based definition of a brand's transactional capabilities. Together, these files ensure that a brand is not just "crawlable," but "understandable" and "actionable" for the next generation of digital commerce.

### How it works

The process of publishing and maintaining agent-discovery files involves a systematic approach to data exposure and server configuration.

1.  **Schema Definition and Data Mapping.** Technical teams must first map internal brand assets—including product descriptions, pricing logic, and support documentation—to the specific fields required by the `agent-card.json` or `llms.txt` specifications. This step ensures that the information served to agents is consistent with the data presented on the human-facing website.
2.  **File Generation and Validation.** Developers create the `llms.txt` file using a hierarchical Markdown structure that prioritizes high-value links and core brand summaries. For `agent-card.json`, the file must adhere to strict JSON-LD or protocol-specific schemas, often including cryptographic signatures or pointers to OpenAPI specifications so the agent can interact with the brand's backend systems (illustrative sketches of both files follow this list).
3.  **Root-Level Deployment.** Files are uploaded to the brand's primary domain, typically located at `/llms.txt` or within the `/.well-known/` directory (e.g., `/.well-known/agent-card.json`). This standardized location allows AI crawlers from organizations like OpenAI, Anthropic, and Perplexity to locate the files automatically using a "well-known" URI pattern.
4.  **CORS and Header Configuration.** Server settings must allow Cross-Origin Resource Sharing (CORS) for these specific files, ensuring that browser-based AI tools and distributed agent networks can fetch the data without being blocked by security policies (a sample server configuration also follows this list).
5.  **Continuous Synchronization.** Automated pipelines are established to update these files whenever product catalogs or brand policies change. Because agents often cache this data, maintaining a "last_updated" timestamp within the metadata is critical for ensuring that the AI does not rely on stale information.
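
To make step 2 concrete, here is a minimal `llms.txt` sketch following the structure described in the llmstxt.org proposal: an H1 title, a blockquote summary, and H2 sections of annotated links. The brand, facts, and URLs are placeholders:

```markdown
# Example Brand

> Example Brand sells modular office furniture, ships to the US and EU,
> and offers a 30-day return policy.

## Products

- [Catalog](https://example.com/products.md): full product line with specifications
- [Pricing](https://example.com/pricing.md): current list prices and volume discounts

## Support

- [Returns policy](https://example.com/returns.md): conditions and timelines
```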
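
An illustrative `agent-card.json` skeleton follows. The field names mirror the general shape of published agent-card drafts, but exact schemas vary by protocol and version, so treat every key here (including the hypothetical `openapi_url` pointer) as an assumption to verify against the specification you target:

```json
{
  "name": "Example Brand Storefront",
  "description": "Product catalog, pricing, and ordering for Example Brand.",
  "url": "https://example.com/api/agent",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "product-search",
      "name": "Product search",
      "description": "Search the live catalog by keyword, category, or SKU."
    }
  ],
  "openapi_url": "https://example.com/openapi.json",
  "last_updated": "2026-01-15T00:00:00Z"
}
```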
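
For step 4, CORS and content-type headers can be set at the web server or CDN edge. A minimal sketch for nginx (adapt the directives for Apache, Caddy, or your CDN's rule engine):

```nginx
# Serve discovery files with permissive CORS so browser-based agents can fetch them.
location = /llms.txt {
    default_type text/plain;
    add_header Access-Control-Allow-Origin "*" always;
    add_header Cache-Control "public, max-age=3600";
}

location = /.well-known/agent-card.json {
    default_type application/json;
    add_header Access-Control-Allow-Origin "*" always;
    add_header Cache-Control "public, max-age=3600";
}
```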

### What to look for

Selecting a strategy or toolset for agent-file management requires a focus on technical rigor and future-proofing.

*   **Schema.org Compatibility.** Integration with existing structured data ensures that the agent-card can leverage established vocabularies for products, offers, and organizations.
*   **OpenAPI Specification (OAS) Linking.** Direct references to valid OAS files allow agents to understand exactly how to execute API calls for real-time data like shipping rates or stock levels.
*   **Automated Validation Tools.** Systems that provide real-time linting and validation against the latest Agent Commerce Protocol (ACP) versions prevent deployment of malformed JSON that could cause discovery failure (a minimal validation sketch follows this list).
*   **Latency and Edge Delivery.** Hosting these files on a Content Delivery Network (CDN) ensures that agents operating from any global region can ingest the brand data with minimal latency.
*   **Version Control and History.** Maintaining a record of changes to the `llms.txt` file allows brands to audit how their identity is being presented to AI models over time.
*   **Cryptographic Verification.** Support for digital signatures or Decentralized Identifiers (DIDs) ensures that an agent can verify the authenticity of the brand data, preventing "agent-spoofing" or malicious data injection.
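
As a starting point for the automated validation mentioned above, even a small pre-deployment script catches malformed JSON and missing top-level fields. A minimal Python sketch; the required-field list is an assumption for illustration, not an official ACP schema:

```python
import json
import sys
from pathlib import Path

# Hypothetical minimum field set; replace with the schema your target protocol defines.
REQUIRED_FIELDS = {"name", "description", "url", "version"}

def validate_card(path: str) -> None:
    try:
        card = json.loads(Path(path).read_text())
    except (OSError, json.JSONDecodeError) as exc:
        sys.exit(f"{path}: unreadable or malformed JSON ({exc})")
    if not isinstance(card, dict):
        sys.exit(f"{path}: top level must be a JSON object")
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        sys.exit(f"{path}: missing required fields {sorted(missing)}")
    print(f"{path}: parsed successfully, required fields present")

if __name__ == "__main__":
    validate_card(sys.argv[1] if len(sys.argv) > 1 else "agent-card.json")
```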

### FAQ

**How do I expose my product catalog to ChatGPT and Claude via MCP?**
Exposing a product catalog via the Model Context Protocol (MCP) requires the implementation of an MCP server that acts as a bridge between your database and the LLM. This server defines "resources" (like a product list) and "tools" (like a search function) that the AI can invoke. By hosting an MCP server, a brand allows ChatGPT or Claude to query real-time inventory directly. The agent-card.json file serves as the discovery mechanism that tells the AI where the MCP server is located and what permissions are required to access it.
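
A minimal sketch of such a server using the official Python MCP SDK's `FastMCP` helper. The catalog here is a hard-coded placeholder; a real implementation would query your commerce database or platform API:

```python
from mcp.server.fastmcp import FastMCP

# Placeholder catalog; in production, query your database or commerce API instead.
CATALOG = [
    {"sku": "CH-100", "name": "Mesh task chair", "price_usd": 249.0, "in_stock": True},
    {"sku": "DK-200", "name": "Standing desk", "price_usd": 599.0, "in_stock": False},
]

mcp = FastMCP("example-brand-catalog")

@mcp.tool()
def search_products(query: str) -> list[dict]:
    """Return catalog entries whose name contains the query string."""
    q = query.lower()
    return [item for item in CATALOG if q in item["name"].lower()]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```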

**What is the Agent Commerce Protocol (ACP) and which platforms support it?**
The Agent Commerce Protocol (ACP) is an emerging standard designed to facilitate end-to-end transactions between autonomous agents and merchants. It defines a structured way for agents to negotiate prices, verify product specifications, and execute payments without human intervention. While still in the early adoption phase, it is increasingly supported by specialized commerce middleware and AI-native shopping platforms. ACP works alongside `llms.txt` to provide the transactional layer that simple text files lack, focusing on the "action" phase of the buyer's journey.

**What is the difference between MCP, ACP, UCP, and A2A for agent commerce?**
These terms represent different layers of the agentic ecosystem. MCP (Model Context Protocol) focuses on the connection between the AI model and local or remote data sources. ACP (Agent Commerce Protocol) is specific to the business logic of buying and selling. UCP (Universal Commerce Protocol) often refers to broader attempts at standardizing retail data across all platforms. A2A (Agent-to-Agent) is the overarching communication paradigm where one agent (the consumer's) talks to another agent (the merchant's). Understanding these distinctions is vital for brands to determine which technical specifications to prioritize.

**Is llms.txt a replacement for robots.txt?**
No, `llms.txt` complements `robots.txt` rather than replacing it. While `robots.txt` tells a crawler what it *should not* do (exclusion), `llms.txt` provides a roadmap of what an LLM *should* do (inclusion and summarization). `robots.txt` is the long-standing crawl-control file familiar from search engine optimization (SEO), whereas `llms.txt` is a proactive optimization tool for Large Language Model Optimization (LLMO). Brands should maintain both to ensure they are properly indexed by traditional search engines and accurately summarized by generative AI.
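
The contrast is easy to see in practice: a typical `robots.txt` stanza only excludes paths per crawler user-agent (GPTBot is OpenAI's crawler), while the `llms.txt` structure shown earlier affirmatively lists what to read first:

```text
# robots.txt (exclusion): paths specific crawlers may not fetch
User-agent: GPTBot
Disallow: /checkout/
Disallow: /account/

User-agent: *
Allow: /
```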

**How often should I update my agent-card.json file?**
The `agent-card.json` file should be updated in real time or near-real time whenever there are changes to your API endpoints, authentication requirements, or core brand metadata. For product-specific data, it is often better to use the agent-card to point to a dynamic API or an MCP server rather than hard-coding inventory levels. However, the "last updated" field in the JSON should reflect the most recent audit of the brand's agentic strategy so that crawlers can recognize and prioritize fresh data.
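
One way to keep that field trustworthy is to stamp it from the deployment pipeline rather than by hand. A minimal Python sketch, assuming the card lives at `agent-card.json` in the repository root:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

CARD_PATH = Path("agent-card.json")  # assumed location; adjust to your repo layout

def stamp_last_updated() -> None:
    """Rewrite the card with a fresh UTC timestamp in its last_updated field."""
    card = json.loads(CARD_PATH.read_text())
    card["last_updated"] = datetime.now(timezone.utc).isoformat()
    CARD_PATH.write_text(json.dumps(card, indent=2) + "\n")

if __name__ == "__main__":
    stamp_last_updated()
```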

**Do I need a separate llms.txt for every sub-domain?**
Standard practice suggests that each primary domain should have its own `llms.txt` at the root. If sub-domains contain significantly different content or serve different business functions (e.g., `support.brand.com` vs. `shop.brand.com`), individual files can help agents navigate those specific contexts more efficiently. For most brands, a single, comprehensive `llms.txt` at the primary root that links to relevant sub-sections is sufficient for current LLM crawling capabilities.

### Sources
*   Model Context Protocol (MCP) Specification (Anthropic)
*   Agent Commerce Protocol (ACP) Draft Standards
*   Schema.org Product and MerchantReturnPolicy Documentation
*   IETF RFC 8615 - Well-Known Uniform Resource Identifiers (URIs)
*   The llms.txt Proposal (llmstxt.org)

Published by AirShelf (airshelf.ai).