
Freelancers' llms.txt strategy is mostly wrong

Most freelancers misuse the llms.txt file by treating it like a sitemap. Learn how to configure this file to help AI search engines recommend your services.

13 min read
By Jenny Beasley, SEO/GEO Specialist
The llms.txt Blueprint

Most freelancers are making a critical mistake with AI crawlers: they are either blocking them entirely or assuming their existing SEO strategy is enough. In the era of Generative Engine Optimization (GEO), this is equivalent to hiding your business card when a client asks for a referral. AI search engines like Perplexity, ChatGPT, and Claude aren't just scraping your content; they are synthesizing answers for clients asking specific questions like, "Who is a reliable freelance motion designer for a SaaS explainer video?"

To control how these engines portray your expertise, you need an [llms.txt](/blog/wordpress-llmtxt-chatgpt-site) file. Think of this as a direct briefing document for the AI. While a sitemap.xml helps Google find your page URLs, an llms.txt file helps LLMs understand your specific service offerings, portfolio highlights, and availability without hallucinating incorrect details.

If you are running your portfolio on WordPress, you have a distinct advantage. You can deploy this file specifically to guide AI agents, ensuring that when a potential client asks an AI for recommendations, your name doesn't just appear - it appears with the correct context and a compelling reason to hire you.

Why are most Freelancers using llms.txt incorrectly?

Many freelancers rush to add an [llms.txt](/blog/wordpress-llmtxt-chatgpt-site) file to their root directory, treating it as a modern "must-have" without understanding its architectural purpose. The common mistake is confusing access with context.

Traditionally, you use robots.txt to tell crawlers where they cannot go (blocking). In contrast, llms.txt is designed to tell AI agents exactly where the highest value information lives. Yet, audits of freelancer portfolios often show files that look exactly like a standard XML sitemap - a raw list of 50+ URLs, including low-value pages like "Contact," "Terms of Service," or vague "Services" pages.
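The distinction is easiest to see side by side. A minimal illustration (both files hypothetical):

```
# robots.txt - controls access (what bots may NOT crawl)
User-agent: GPTBot
Disallow: /client-drafts/

# llms.txt - provides context (what bots SHOULD read first)
# Jane Doe - Freelance Motion Designer
- [SaaS Explainer Reel](https://example.com/work/saas-reel)
  > Portfolio highlights for B2B software clients.
```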

When an AI agent like Claude or Perplexity reads a file with zero context, it has to guess which URL contains your core expertise. If you just list links, you are forcing the AI to crawl every page (which it might not do due to token limits) to understand what you actually do.

The Problem with HTML-Heavy Portfolios

Another critical error is pointing llms.txt to visual-heavy HTML pages. An LLM doesn't care about your CSS animations or your complex DOM structure. When it parses a standard WordPress page, it has to wade through hundreds of lines of code - <div class="elementor-wrapper"> wrappers, <script> blocks, and <style> tags - just to find your bio.

This "noise-to-signal" ratio hurts your visibility. The AI burns processing power stripping away the HTML tags to get to the text.
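You can measure this for yourself. A rough sketch using Python's standard library - the sample markup below is invented for illustration:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def signal_ratio(html: str) -> float:
    """Fraction of the raw page that is actual readable text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    return len(text) / max(len(html), 1)

# A tiny, invented page-builder fragment: one sentence of bio buried in markup
page = ('<div class="elementor-wrapper"><style>.a{color:red}</style>'
        '<p>I build headless WordPress sites.</p></div>')
print(f"{signal_ratio(page):.0%} of this page is signal")
```

On a real page-builder export the ratio is usually far worse, which is exactly the overhead the bot has to pay for.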

The "Training Data" Opportunity

To fix this, your llms.txt should not just list URLs; it should act as a curated syllabus of your expertise. Instead of linking to your visual portfolio, link to a simplified, text-heavy version of your case studies or a dedicated Markdown file.

This is where a tool like LovedByAI becomes useful. Its AI-Friendly Page feature can automatically generate a stripped-down, token-efficient version of your content that is specifically formatted for LLM parsing, removing the heavy HTML overhead that confuses bots.

Here is an example of a weak vs. strong setup:

Weak (Standard List):

# My Site
https://example.com/
https://example.com/about
https://example.com/contact

Strong (Context-Rich):

# Senior React Developer Portfolio
- [Case Study: Reducing TTFB by 40%](https://example.com/case-studies/performance-optimization)
  > Technical breakdown of caching strategies and database indexing.
- [React Component Patterns Guide](https://example.com/blog/react-patterns)
  > A comprehensive guide to HOCs and Render Props for junior developers.

By adding descriptions and selecting only high-value pages, you are effectively "feeding" the engine the exact data it needs to cite you as an authority. For more on the technical specification, you can reference the official llms.txt proposal.

Remember, the goal isn't just to be crawled; it's to be understood. If you leave the interpretation up to the bot, you risk being categorized generically. If you guide it with a structured llms.txt, you control the narrative.

How should Freelancers structure a perfect llms.txt file?

Think of llms.txt as a README for your entire business. When an AI agent like Perplexity or a custom GPT crawls your site, it isn't looking for visual flair. It is looking for structured data to answer a user's specific query, such as "Find me a freelance developer who knows WPGraphQL and charges under $150/hr."

If your file is just a list of URLs, you force the AI to crawl deep to find those answers. Often, it won't bother. Instead, you need to bring that "Who, What, Where" data to the surface immediately using Markdown descriptions.

The "Syllabus" Structure

Your file should live at the root of your WordPress installation (yourdomain.com/llms.txt). Structure it hierarchically, grouping your URLs by intent rather than site architecture.

Here is a template optimized for freelancer visibility:

# Portfolio: Jane Doe - Full Stack WordPress Engineer
> Expert in headless architectures, API integrations, and performance optimization.

## Core Services
- [Headless WordPress Migration](https://example.com/services/headless-setup)
  > Technical breakdown of moving monolithic sites to Next.js + WPGraphQL.
- [Custom Plugin Development](https://example.com/services/plugins)
  > Specialized in React-based admin interfaces and REST API extensions.

## Social Proof & Pricing
- [Case Study: Fintech SaaS](https://example.com/work/fintech-migration)
  > Reduced server costs by 40% and improved TTFB to <200ms.
- [2024 Rate Card & Availability](https://example.com/hire-me)
  > Current availability: Q3 2024. Rates starting at $120/hr.

Why this format wins citations

Notice the descriptions under each link. This is the "Goldilocks" zone for Large Language Models.

  1. Explicit Context: You aren't just linking to a page; you are telling the AI why that page matters. This increases the likelihood that the AI will use that specific page as a citation source.
  2. Constraint Handling: By explicitly mentioning "React-based admin interfaces" or "$120/hr," you allow the AI to filter you in for specific queries. If a user asks a bot for "affordable developers," and you hide your pricing behind a contact form, the AI excludes you because it cannot verify the constraint.
  3. Token Efficiency: The AI gets a high-level overview of your expertise without needing to parse thousands of tokens of HTML code.
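A file like the template above is also easy to generate programmatically, so you can regenerate it whenever your services or rates change. A minimal sketch - all page data here is illustrative:

```python
# Sketch: build a context-rich llms.txt from curated entries.
# Titles, URLs, and descriptions are examples, not a required schema.
SECTIONS = {
    "Core Services": [
        ("Headless WordPress Migration",
         "https://example.com/services/headless-setup",
         "Technical breakdown of moving monolithic sites to Next.js + WPGraphQL."),
    ],
    "Social Proof & Pricing": [
        ("Case Study: Fintech SaaS",
         "https://example.com/work/fintech-migration",
         "Reduced server costs by 40% and improved TTFB to <200ms."),
    ],
}

def build_llms_txt(title: str, summary: str, sections: dict) -> str:
    """Render sections of (name, url, description) tuples into llms.txt Markdown."""
    lines = [f"# {title}", f"> {summary}", ""]
    for section, entries in sections.items():
        lines.append(f"## {section}")
        for name, url, desc in entries:
            lines.append(f"- [{name}]({url})")
            lines.append(f"  > {desc}")
        lines.append("")
    return "\n".join(lines)

content = build_llms_txt(
    "Portfolio: Jane Doe - Full Stack WordPress Engineer",
    "Expert in headless architectures, API integrations, and performance optimization.",
    SECTIONS,
)
print(content)
```

Run it on deploy (or from a cron job) and you never ship a stale file by hand.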

The WordPress Implementation Gap

The challenge for most WordPress users is that your "pages" are heavy HTML documents full of <div> wrappers and scripts. An AI agent prefers raw text or Markdown.

To solve this, you can manually upload a .txt file via SFTP, or use a tool to generate one. For the linked content itself, consider creating "shadow" pages - simplified text versions of your services. The AI-Friendly Page feature from LovedByAI handles this automatically, creating a stripped-down, token-efficient version of your content that you can link to directly in your llms.txt, ensuring the bot sees your expertise, not your theme's bloat.

For the full technical specification on file structure, refer to the llms.txt project documentation. By structuring your data this way, you stop hoping for a random crawl and start dictating exactly how AI engines represent your brand.

How do I add an llms.txt file to my WordPress site?

For most freelancers, the simplest way to deploy this file is manual upload. While you might be used to plugins handling your sitemap.xml or robots.txt, the llms.txt standard is new enough that few WordPress plugins support it natively yet.

The goal is to get a plain text file into your server's root directory so it resolves at yourdomain.com/llms.txt.

The Manual Upload Method (SFTP)

You don't need a developer for this. You just need access to your file system.

  1. Create the file locally: Open a code editor (like VS Code) or a simple text editor (Notepad/TextEdit). Save a file named llms.txt with your Markdown content.
  2. Connect via SFTP: Use a client like FileZilla to connect to your server.
  3. Locate the Root: Navigate to the public_html folder (or www folder, depending on your host). This is where your wp-config.php and index.php files live.
  4. Upload: Drag your file into this folder.

Once uploaded, visit https://yourdomain.com/llms.txt in your browser. If you see your text, you are 90% there.

Handling Caching and Server Rules

Here is where WordPress gets tricky. Because WordPress relies on rewriting URLs via index.php to load content, sometimes a static text file in the root directory gets ignored, cached aggressively, or blocked by security plugins.

If you visit your URL and get a 404 error, your Nginx or Apache configuration might be trying to route the request through WordPress instead of serving the file directly.

For Nginx users (common on managed hosting like Kinsta or WP Engine), you may need to ask support to add a rule, or add this to your configuration if you manage your own server:

location = /llms.txt {
    try_files $uri =404;
    access_log off;
    log_not_found off;
}
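If you are on Apache instead, a comparable exception can go in your .htaccess, above the standard WordPress rewrite block (a sketch; adjust to your setup):

```apache
# Serve llms.txt directly and stop processing further rewrite rules
RewriteEngine On
RewriteRule ^llms\.txt$ - [L]
```

Note that WordPress's default .htaccess already serves files that exist on disk; this rule mainly matters when a security plugin adds stricter rewrites of its own.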

The Caching Trap: Freelancers often update their llms.txt to reflect immediate availability (e.g., "Available for Q4 projects"). If your host or CDN (like Cloudflare) caches this file for 30 days, AI agents will continue to see "Booked until October" long after you are free.

Configure your caching plugin to exclude llms.txt from being cached. In Cloudflare, create a Page Rule for *yourdomain.com/llms.txt with the setting "Cache Level: Bypass."

Validating AI Crawler Access

Just because you can see the file doesn't mean an AI bot can. Some security plugins block requests from "unknown user agents" or command-line tools, which many AI scrapers resemble.

To test this, open your terminal and run a simple curl command. This mimics how a bot requests your file:

curl -I -A "GPTBot" https://yourdomain.com/llms.txt

You want to see an HTTP/2 200 status code. If you see 403 Forbidden or 503 Service Unavailable, your firewall is blocking AI agents, and all your optimization work is invisible to them.
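To check several AI crawlers at once, you can script the same test. A sketch using only the Python standard library - the user-agent list and domain are assumptions, so substitute your own:

```python
import urllib.request
import urllib.error

# Common AI crawler user agents (an assumed list; extend as needed)
AI_USER_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]

def check_agent(url: str, agent: str) -> int:
    """HEAD-request url with the given User-Agent; return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": agent}, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def summarize(results: dict) -> list:
    """Turn {agent: status} into readable pass/fail lines."""
    return [f"{agent}: {'OK' if code == 200 else f'BLOCKED ({code})'}"
            for agent, code in results.items()]

# Live check (uncomment and replace the domain to run against your own site):
# statuses = {a: check_agent("https://yourdomain.com/llms.txt", a) for a in AI_USER_AGENTS}
# print("\n".join(summarize(statuses)))
print("\n".join(summarize({"GPTBot": 200, "ClaudeBot": 403})))
```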

For a comprehensive check, you can also run your site through the LovedByAI SEO Checker, which specifically looks for the presence and accessibility of llms.txt files alongside your schema markup.

Ensuring this file is readable allows engines like Perplexity to index your current rates and availability accurately, rather than hallucinating outdated information from a three-year-old cached bio page. Reference the Nginx documentation if you run into persistent routing issues.

How to Build and Deploy Your Freelancer llms.txt

As a freelancer, your portfolio is your lifeblood. However, AI agents like Claude, ChatGPT, and Perplexity don't browse your site the way humans do. They struggle with complex themes and heavy JavaScript. The llms.txt file is a new standard proposed to give these bots a direct, clean summary of who you are and what you sell - essentially a resume for robots.

Step 1: Draft Your Identity in Markdown

Create a new file named llms.txt. This file uses Markdown syntax, which is lightweight and easily parsed by Large Language Models (LLMs). Keep it concise. Define your role, your specific services, and your pricing model immediately.

Here is a template you can copy:

# [Your Name] - [Your Title]

> [One sentence value proposition, e.g., "Full-stack WordPress developer specializing in API integrations."]

## Core Services

- Custom Plugin Development
- Speed Optimization (Core Web Vitals)
- Headless WordPress Architecture

## Contact & Hiring

Step 2: Link Only to Clean, Semantic Pages

In the file, link only to pages that use clean, semantic HTML. If your portfolio relies on heavy <div> soup or client-side rendering, the AI might hallucinate your details. Ensure the target pages use proper <article>, <h1>, and <main> tags.
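For reference, the body of a clean target page might look like this (a simplified sketch):

```html
<main>
  <article>
    <h1>Custom Plugin Development</h1>
    <p>I build React-based admin interfaces and REST API extensions for WordPress.</p>
  </article>
</main>
```

The landmark tags tell the parser exactly where the content starts and ends, with nothing to strip away first.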

If your current site structure is messy, you can use tools like LovedByAI to generate an AI-Friendly Page version of your key content, specifically designed for machine readability.

Step 3: Upload to WordPress Root

Connect to your site via SFTP or use a plugin like WP File Manager. Upload the llms.txt file directly to your public_html (root) folder.

The Pitfall: Some security plugins block access to .txt files to prevent sensitive data leaks. You must whitelist llms.txt in your security settings or .htaccess file.

Step 4: Test Visibility

Navigate to yourdomain.com/llms.txt in a browser. You should see the raw text. Next, check if your site is actually ready for agent retrieval. You can run a scan with our free AI SEO Checker to see if your semantic structure holds up against an actual crawl. Finally, ask Perplexity explicitly: "What services does [Your Name] offer based on their website?" to verify it picked up the new file.

Conclusion

Simply uploading a generic text file to your root directory is not enough to capture the attention of modern AI search engines. The "set it and forget it" approach to llms.txt fails because it ignores how Large Language Models actually ingest and prioritize data. If your file lacks clear structure or context, you are missing the chance to control how agents like ChatGPT and Perplexity present your portfolio to potential clients.

The goal isn't just to be crawled; it is to be understood. By treating your llms.txt as a strategic roadmap rather than a compliance checkbox, you help AI connect the dots between your specific skills - whether that is React development or technical copywriting - and the intent of the searcher. It is time to move beyond basic visibility and start optimizing for answerability, turning your site into a resource that AI agents trust and recommend.

For a complete guide to AI SEO strategies for Freelancers, check out our Freelancers AI SEO landing page.

Jenny Beasley

Jenny Beasley is an SEO and GEO specialist focused on helping businesses improve their visibility across traditional search and AI-driven platforms.

Frequently asked questions

**Is `llms.txt` an official standard like `robots.txt`?**

Not formally, but it is rapidly becoming a de facto standard for AI optimization. While `robots.txt` is the internet's established protocol for controlling *access* (crawling), `llms.txt` is a newer community proposal designed specifically to improve *comprehension* for Large Language Models. Think of `robots.txt` as the security guard at the door, and `llms.txt` as the tour guide showing the AI exactly where your most valuable content lives. While it is not yet an IETF standard, major AI developers are adopting it to make their scrapers more efficient.

**Can I use `llms.txt` to block AI crawlers from my content?**

No. The `llms.txt` file is exclusively for **content discovery**, not restriction. It acts as a curated "allow list" or map that highlights files you *want* AI to prioritize. If you need to prevent AI from scraping specific pages or directories, you must strictly use `robots.txt` with `User-agent` directives (like blocking `GPTBot` or `CCBot`) or implement server-side blocking. Relying on `llms.txt` to protect sensitive data is dangerous because it does not signal "do not crawl" to any bot.

**How often should I update my `llms.txt` file?**

You should update it whenever you significantly restructure your website or launch major new content pillars. Unlike an XML sitemap which might update automatically with every post, `llms.txt` is meant to be a concise, curated map of your highest-value information. A good rule of thumb is to review it: 1. When adding a new documentation section or product category. 2. If you change your URL structure (to prevent feeding 404s to the AI). 3. During quarterly audits to ensure the referenced markdown files still represent your best work.

Ready to optimize your site for AI search?

Discover how AI engines see your website and get actionable recommendations to improve your visibility.