
Web-to-markdown isn't new. But when you're feeding thousands of pages to LLMs daily, sloppy conversion bleeds tokens - and money. compress.new extracts only what matters, and it's free. Getting started is simple: just prepend `https://compress.new/` to any public URL you want to convert. You can control extraction and compression behavior with feature flags passed as query parameters (for example, enabling compression or targeting specific content). No setup, no SDK (yet!).

About
In the age of large language models, efficiency isn't just a nice-to-have; it directly impacts your budget and speed. If you constantly scrape web content, research papers, or documentation to feed into your AI workflows, you know the pain of token bloat. Standard web-to-markdown conversions are often messy, dragging along navigation bars, footers, ads, and excessive whitespace that chew up valuable tokens without adding any real intelligence for your LLM. That's where compress.new comes in. This isn't just another conversion tool; it's a precision instrument designed to surgically extract the core, meaningful content from any public webpage, stripping away the digital clutter that costs you time and money. The result is clean, highly relevant text every time, letting you process significantly more information within the same token limit, or extract deeper insights from the same input size. It's about maximizing the signal and eliminating the noise before it ever reaches your API call.
Getting started with this utility is refreshingly straightforward, embracing a philosophy of zero friction for developers and power users alike. There is no complex setup, registration, or SDK installation required to see immediate results. To use compress.new, simply take the public URL you wish to process and prepend the service address right before it, like so: https://compress.new/yourtargeturl.com. This immediate accessibility means you can drop the efficiency boost into your daily routine or complex pipelines instantly. For those who need granular control over the compression process (perhaps you need to exclude certain sections or fine-tune the level of abstraction), the system offers feature flags accessible via query parameters. This customizable extraction ensures that whether you are dealing with dense technical manuals or sprawling news articles, you get precisely the distilled markdown output your specific LLM task requires, making your interactions with AI faster, cheaper, and smarter.
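The URL-prefix pattern is easy to script. Below is a minimal sketch in Python that builds a compress.new request URL for a target page, with optional query-parameter flags appended. Note that the flag name `compress` shown here is a hypothetical placeholder: this overview doesn't document the actual feature-flag names, so check the service itself for the real ones.

```python
from urllib.parse import urlencode

def build_compress_url(target_url: str, **flags) -> str:
    """Prepend the compress.new service address to a public URL.

    Any keyword arguments are appended as query-parameter feature
    flags. Flag names are assumptions for illustration, not the
    service's documented parameters.
    """
    base = f"https://compress.new/{target_url}"
    return f"{base}?{urlencode(flags)}" if flags else base

# Plain conversion: just the prefixed URL.
print(build_compress_url("example.com/docs/intro"))
# With a (hypothetical) compression flag enabled.
print(build_compress_url("example.com/docs/intro", compress="true"))
```

Fetching the resulting URL with any HTTP client (curl, `requests`, a pipeline step) then returns the distilled markdown instead of the raw page.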
This focus on clean, token-optimized output transforms the economics of working with large language models, especially when you process vast quantities of external data daily. By drastically cutting the tokens wasted on irrelevant formatting and boilerplate text, compress.new offers a tangible return on investment for any serious AI practitioner, researcher, or developer building scalable solutions. It lets you spend your computational resources on the reasoning and generation tasks that matter, rather than paying the hidden tax of inefficient data preparation. Give it a try today: the simplicity of the URL-prefix method combined with the intelligent compression engine means you can start saving tokens and gaining clarity instantly, all completely free for initial use.