
Applyr Standalone: When Internal Tools Become Products



The Problem Space

The job market is broken. Not in the "economy is bad" sense—broken in the "applying to jobs is mechanically terrible" sense.

Every platform has a different resume format. LinkedIn wants one layout. Indeed wants another. Tailored applications reportedly get response rates 3-4x higher than template blasts, but tailoring 50 applications by hand takes 8-10 hours. Most people don't have that time.

So they blast templates. Recruiters see templates. Templates get rejected.

I'd been running a small internal automation system for a while—mostly for personal use, testing ideas around job market efficiency. It worked well enough that I started thinking: what if this was a product?

The Extraction

February 2026. I spent a week pulling the system apart.

The internal version was tightly integrated with ArgoBox infrastructure: task scheduling ran through our OpenClaw agent, the database lived on the Unraid server, and resume storage was scattered across the personal vault.

To extract it into a standalone package, I had to:

  1. Decouple from infrastructure — Pull out the job-fetching logic (which sites to scrape, how to paginate, rate limiting) into a standalone pipeline. No ArgoBox-specific assumptions.

  2. Abstract the resume matching — Built a modular system where you can plug in different resume versions, have the system select which one makes sense for a given job description, and generate a tailored application.

  3. Add evidence capture — This was critical. The system logs:

    • Which resume was selected and why
    • What changes were made to tailor the application
    • Screenshots of the application state
    • Timestamps for everything

    If there's ever a question about what happened, the evidence is there. This alone prevents a lot of legal/compliance headaches that plague job automation systems.

  4. Multi-platform support — Abstracted the platform layer so it works with LinkedIn, Indeed, and (with community contribution) any other job board.
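The platform layer in step 4 can be sketched as a small plugin interface. This is a minimal illustration, not Applyr's actual API; the class and method names are assumptions:

```python
from abc import ABC, abstractmethod


class JobPlatform(ABC):
    """Hypothetical per-platform plugin contract (names are illustrative)."""

    name: str  # e.g. "linkedin", "indeed"

    @abstractmethod
    def fetch_jobs(self, query: str, page: int) -> list[dict]:
        """Return raw job postings for a search query."""

    @abstractmethod
    def submit_application(self, job_id: str, application: dict) -> dict:
        """Submit a tailored application; return evidence metadata."""


class IndeedPlatform(JobPlatform):
    """Toy implementation showing the shape a real module would fill in."""

    name = "indeed"

    def fetch_jobs(self, query: str, page: int) -> list[dict]:
        # A real module would hit Indeed's API or fall back to a browser.
        return [{"id": "demo-1", "title": f"{query} Engineer", "page": page}]

    def submit_application(self, job_id: str, application: dict) -> dict:
        return {"job_id": job_id, "status": "submitted"}
```

Because every platform module satisfies the same contract, community contributions stay isolated from core logic.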

The Architecture Decision

Early on, I had to choose: build a browser automation system (Selenium/Playwright) or work at the platform API level?

Browser automation is simpler to reason about—you see what the user sees. But it's fragile (UI changes break automation) and it gets flagged as bot behavior more easily.

API-level is more robust but requires reverse-engineering each platform's undocumented APIs.

I split the difference: platform-specific modules that prefer APIs when available, fall back to browser automation for platforms that don't expose an API. Each module is tested independently, so if one platform changes their API, it doesn't cascade to the others.

The Compliance Layer

This is the part I spent the most time on, because it's where automation systems get shut down.

Job platforms explicitly forbid account automation in their ToS. Do they have a technical reason? Sometimes. Do they enforce it uniformly? No. Do they err on the side of caution? Yes.

The compliance layer:

  • Rate limiting per platform — LinkedIn has stricter limits than Indeed. The system knows this and adapts.
  • Session rotation — Doesn't submit everything in one session; actions are spread over time and across sessions.
  • Behavioral authenticity — The system doesn't do 50 applications in 2 minutes. It spaces them out, adds random delays, simulates human behavior patterns.
  • Audit trail — Every action is logged with context. If the platform support team asks "what did you do," the evidence is there and it's clear we were compliant.

I'm not making excuses for the platforms' ToS. They're overly broad and they don't distinguish between good automation and bad. But if you're automating something on their infrastructure, you need to understand their enforcement stance and design defensively.

The Evidence System

This is the part I'm most proud of.

Most job automation tools either:

  • Don't log anything (easy to break, hard to prove you didn't cheat)
  • Log too much (storing full page HTML, screenshots of everything, millions of logs)
  • Log insecurely (plaintext credentials, unencrypted storage)

Applyr logs:

  • Decision points (why was this resume selected?)
  • State transitions (application started → form filled → submitted)
  • Metadata (timestamp, platform, job ID, user agent, IP if applicable)
  • Structured evidence (JSON records that can be queried, analyzed, audited)
  • Nothing sensitive (no passwords, no personal data beyond what's needed to prove compliance)

If there's ever a dispute—"did you really apply to this job or did your bot apply?"—the evidence file answers it.
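A minimal sketch of what one such evidence record could look like, assuming a one-JSON-line-per-event format; the field names are illustrative, not Applyr's actual schema:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class EvidenceRecord:
    """Hypothetical shape of one audit entry (field names are illustrative)."""

    platform: str
    job_id: str
    event: str  # e.g. "resume_selected", "form_filled", "submitted"
    detail: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = EvidenceRecord(
    platform="linkedin",
    job_id="3141592",
    event="resume_selected",
    detail={"resume": "backend-v3", "reason": "matched 7/9 required skills"},
)
line = json.dumps(asdict(record))  # one JSON line per event, easy to query
```

Append-only JSON lines keep the trail queryable with ordinary tools while storing nothing sensitive.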

Why This Matters

Job automation gets killed because:

  1. Platforms are tired of bot abuse
  2. Users don't have evidence of what actually happened
  3. Security teams see "automation" and assume "malicious"

If you can prove:

  • You actually applied to legitimate jobs
  • You tailored each application
  • You spaced them out per platform limits
  • You weren't doing anything deceptive

...you've moved from "suspicious automation" to "automated compliance."

The Python Package Structure

applyr/
├── core/           # Resume selection, tailoring logic
├── platforms/      # LinkedIn module, Indeed module, extensible
├── evidence/       # Logging, audit trail, export
├── scheduler/      # Rate limiting, session management
├── cli/            # Command-line interface
└── tests/          # Comprehensive test suite

Each piece is independently testable. The platforms are plugin-based, so community contributions don't touch core logic.
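One way "independently testable" plays out in practice is a shared contract check run against every platform module. This is a sketch under the assumption of a `fetch_jobs` / `submit_application` plugin shape, not the suite's actual tests:

```python
class FakePlatform:
    """In-memory stand-in so the contract check needs no network access."""

    name = "fake"

    def fetch_jobs(self, query, page):
        return [{"id": "f-1", "title": query}]

    def submit_application(self, job_id, application):
        return {"job_id": job_id, "status": "submitted"}


def check_platform_contract(platform):
    """Same checks for every platform module: fetch, then submit."""
    jobs = platform.fetch_jobs("python", 1)
    assert isinstance(jobs, list) and "id" in jobs[0]
    result = platform.submit_application(jobs[0]["id"], {})
    assert result["status"] == "submitted"


check_platform_contract(FakePlatform())
```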

What's Next

The initial release focuses on LinkedIn + Indeed. The architecture is designed so that adding a new platform comes down to:

  • Define the platform-specific API/selectors
  • Implement the evidence capture for that platform
  • Add tests
  • Contribute back

The hard parts (resume matching, tailoring logic, the evidence system) are already done. Adding Glassdoor or AngelList becomes a straightforward module.

The Honest Part

This tool exists because I got tired of the job market's friction. I built it for myself. It works.

But I also know that job platforms will, at some point, either:

  • Ban it outright
  • Update their ToS to prohibit it unambiguously
  • Sue the maintainers (less likely, but possible)

The evidence system exists because I want to be defensible if that happens. "We were compliant by your own standards" doesn't win in court, but it helps with public credibility.

Why Open Source

I could keep this private. Use it myself, sell it, keep it as a competitive advantage.

But the honest thing is: if you're automating job applications, the tools and decisions should be transparent. Closed-source automation tools are the ones that get banned. Open-source tools that follow platform ToS openly get more respect from the platforms themselves.

Plus, the job market is broken enough that distributing a working solution is more valuable than hoarding it. If Applyr helps 1,000 people get interviews they wouldn't have otherwise gotten, that's a win.

The Realization

About halfway through extracting the system into a package, I realized something: every internal tool that works well enough to extract is a product waiting to happen.

ArgoBox started as internal tooling. So did the build swarm. So did Argonaut RAG.

The question isn't "is this good enough to be a product?" It's "have you already proven it works by using it yourself?"

If yes, extract it. Open source it. Let others solve the same problem you already solved.

Applyr is that for job automation. Multi-platform, compliant, auditable, extensible.

The job market is still broken. But at least now there's tooling to deal with it.


Status: v0.1.0 released to PyPI as applyr-core. LinkedIn + Indeed modules stable. Community contributions welcome. Security/compliance audit planned before v1.0.