src/content/docs/creating-custom-feeds.mdx (2 additions & 10 deletions)
@@ -11,14 +11,6 @@ When auto-sourcing isn't enough, you can write your own configuration files to c
 **Prerequisites:** You should be familiar with the [Getting Started](/getting-started) guide before diving into custom configurations.
 
-<Aside type="note" title="Release note">
-This guide tracks the current documentation tree and may describe features that have not yet shipped in the
-latest released `html2rss` gem. If you want the newest integrated behavior, prefer running
-[`html2rss-web`](/web-application/getting-started) via Docker. The web application ships as a rolling
-release and usually reflects the latest development state of the gem first. See [Versioning and
-releases](/web-application/reference/versioning-and-releases/) for details.
-</Aside>
-
 <Aside type="tip" title="Use this guide when you need more control">
 Start with included feeds first. If your site is not covered, try [automatic feed
 generation](/web-application/how-to/use-automatic-feed-generation/) next. Reach for a custom config when you
@@ -48,7 +40,7 @@ When auto-sourcing isn't enough, you can write your own configuration files to c
 3. **Validate the config** with `html2rss validate your-config.yml`
 4. **Render the feed** with `html2rss feed your-config.yml`
 5. **Add it to `html2rss-web`** so you can use it through your normal instance
-6. **Escalate strategy when needed**: if static fetching is insufficient, switch to a JavaScript/browser-based extraction strategy
+6. **Escalate request strategy when needed**: use a browser-based rendering strategy only when troubleshooting requires it
 
 This order keeps iteration fast and makes it easier to see whether the problem is the page structure, your
 selectors, or the fetch strategy.
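
Editor's note: the workflow steps in this hunk operate on a YAML feed config. As a minimal sketch of what such a file could look like (the URL and selectors are illustrative placeholders, following the html2rss `channel`/`selectors` config layout, not content from this diff):

```yaml
# your-config.yml — hypothetical minimal feed config.
# URL and selectors are placeholders; adapt them to the target page.
channel:
  url: https://example.com/articles
  title: Example Articles
selectors:
  items:
    selector: ".post-card"   # one matched element per feed item
  title:
    selector: "h2"
  link:
    selector: "a"
    extractor: "href"        # extract the link's href attribute, not its text
```

Steps 3 and 4 above then become `html2rss validate your-config.yml` and `html2rss feed your-config.yml`.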
@@ -210,7 +202,7 @@ there.
 - **No items found?** Check your selectors with browser tools (F12) - the `items.selector` might not match the page structure
 - **Invalid YAML?** Use spaces, not tabs, and ensure proper indentation
 - **Website not loading?** Check the URL and try accessing it in your browser
-- **Missing content?** Some websites load content with JavaScript - you may need a JavaScript/browser-based extraction strategy instead of plain HTTP fetching
+- **Missing content?** Try a browser-based rendering strategy during troubleshooting
 - **Wrong data extracted?** Verify your selectors are pointing to the right elements
 
 **Need more help?** See our [comprehensive troubleshooting guide](/troubleshooting/troubleshooting) or ask in [GitHub Discussions](https://github.com/orgs/html2rss/discussions).
 BOTASAURUS_SCRAPER_URL="http://localhost:4010" html2rss auto https://example.com/protected --strategy botasaurus ; \
 html2rss auto https://example.com/articles --items_selector ".post-card"
 `}
 lang="bash"
 />
 
 Command: `html2rss auto URL`
 
-Default behavior uses `--strategy auto`, which tries `faraday` then `botasaurus` then `browserless`.
+Default behavior is `--strategy auto`, which tries `faraday` then `botasaurus` then `browserless`.
 
 #### URL Surface Guidance For `auto`
 
@@ -52,25 +51,29 @@ When possible, pass a direct listing/update URL instead of a top-level homepage
 
 #### Failure Outcomes You Should Expect
 
-When no extractable items are found, `auto` now classifies likely causes instead of only returning a generic message:
+When no extractable items are found, `auto` classifies likely causes instead of only returning a generic message:
 
 - `blocked surface likely (anti-bot or interstitial)`:
-  - retry with `--strategy browserless`
   - try a more specific public listing URL
 - `app-shell surface detected`:
-  - retry with `--strategy browserless`
   - switch to a direct listing/update URL
 - `unsupported extraction surface for auto mode`:
   - switch to listing/changelog/category URLs
   - use explicit selectors in a feed config
 
 Known anti-bot interstitial responses (for example Cloudflare challenge pages) are surfaced explicitly as blocked-surface errors.
 
-If you run with the default `--strategy auto`, no manual strategy override is required for fallback ordering.
+If all fallback tiers run but still extract zero items, html2rss raises:
+
+- `No RSS feed items extracted after auto fallback ...`
+
+If failures continue after URL/surface fixes, retry with an explicit browser-based override (`--strategy browserless`), or `--strategy botasaurus` when `BOTASAURUS_SCRAPER_URL` is configured.
+
+Start by changing the input URL to a direct listing/update page, then move to explicit selectors if needed.
 
 #### Browserless Setup And Diagnostics (CLI)
 
-`browserless` is opt-in for CLI usage.
+`browserless` is an explicit override for CLI usage.
 
 <Code
 code={`
@@ -104,7 +107,7 @@ For custom Browserless endpoints, `BROWSERLESS_IO_API_TOKEN` is required.
 
 #### Botasaurus Environment Requirement (CLI)
 
-`botasaurus` is opt-in for CLI usage and requires `BOTASAURUS_SCRAPER_URL`:
+`botasaurus` is an explicit override for CLI usage and requires `BOTASAURUS_SCRAPER_URL`:
 
 <Code
 code={`
@@ -128,6 +131,7 @@ Loads a YAML config, builds the feed, and prints the RSS XML to stdout.
src/content/docs/ruby-gem/reference/strategy.mdx (10 additions & 5 deletions)
@@ -1,21 +1,26 @@
 ---
 title: Strategy
-description: "Learn about different strategies for fetching website content with html2rss. Choose between faraday, browserless, and botasaurus strategies for optimal performance."
+description: "Learn how html2rss chooses request strategies by default with auto fallback, and when to override with faraday, botasaurus, or browserless."
 - **`faraday`**: Makes a direct HTTP request. It is fast but does not execute JavaScript.
 - **`browserless`**: Renders the website in a headless Chrome browser, which is necessary for JavaScript-heavy sites.
 - **`botasaurus`**: Delegates fetching to a Botasaurus scrape API. This is opt-in and requires `BOTASAURUS_SCRAPER_URL`.
 
 `strategy` is a top-level config key. Request-specific controls live under `request`.
 
-If you use CLI `--strategy auto` (default), html2rss tries `faraday` then `botasaurus` then `browserless`.
+`auto` falls back to the next strategy when the current attempt errors or extracts zero items. Use explicit `--strategy ...` only when you need to force a specific transport for troubleshooting or reproducibility.
 
-Use `faraday` first for direct newsroom/listing/changelog pages. Prefer `botasaurus` as the first explicit browser-based strategy when you have a Botasaurus scrape API. Use `browserless` when you specifically need Browserless preload actions.
+## `auto` (default)
+
+The default strategy chain is:
+
+`faraday` -> `botasaurus` -> `browserless`
 
 ## `browserless`
 
@@ -65,7 +70,7 @@ Set the `strategy` at the top level of your feed configuration and put request c
 
 Use this split consistently:
 
-- `strategy`: selects `faraday`, `browserless`, or `botasaurus`
+- `strategy`: selects `auto`, `faraday`, `browserless`, or `botasaurus`
 - `headers`: top-level headers shared by all strategies
 - `request.max_redirects`: redirect limit for the request session
 - `request.max_requests`: total request budget for the whole feed build
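
Editor's note: the key split in the bullets above can be sketched as a feed-config fragment. The concrete values below are illustrative assumptions, not documented defaults:

```yaml
# Hypothetical fragment showing the strategy/headers/request split.
strategy: auto               # or: faraday, browserless, botasaurus
headers:                     # top-level headers shared by all strategies
  User-Agent: "Mozilla/5.0 (compatible; html2rss)"
request:
  max_redirects: 3           # redirect limit for the request session
  max_requests: 10           # total request budget for the whole feed build
```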
src/content/docs/troubleshooting/troubleshooting.mdx (6 additions & 6 deletions)
@@ -32,34 +32,34 @@ The `auto` flow is URL-surface sensitive.
 
 If extraction quality is poor, switch to a more specific listing/update URL before tuning selectors.
 
-If you use CLI defaults, `--strategy auto` is already active and attempts `faraday` then `botasaurus` then `browserless`.
-
 ### Empty Feeds
 
 If your feed is empty, check the following:
 
 - **URL:** Ensure the `url` in your configuration is correct and accessible.
 - **`items.selector`:** Verify that the `items.selector` matches the elements on the page.
 - **Website Changes:** Websites change their HTML structure frequently. Your selectors may be outdated.
-- **JavaScript Content:** If the content is loaded via JavaScript, move from `faraday` to a rendering strategy such as `browserless` (or `botasaurus` when you use a Botasaurus scrape API).
+- **JavaScript Content:** If the content is loaded via JavaScript, use a browser-based rendering strategy.
 - **Authentication:** Some sites require authentication — check if you need to add headers or use a different strategy.
 
 ### `No scrapers found` Failure Taxonomy (`auto`)
 
 `auto` classifies no-scraper failures with actionable hints:
 
 - **Blocked surface likely (anti-bot or interstitial):**
-  - retry with `--strategy browserless`
   - try a more specific public listing URL
 - **App-shell surface detected:**
-  - retry with `--strategy browserless`
   - target a direct listing/update page instead of homepage/shell entrypoint
 - **Unsupported extraction surface for auto mode:**
   - switch to listing/changelog/category URLs
   - or use explicit selectors in YAML config
 
 Known anti-bot interstitial patterns (for example Cloudflare challenge pages) are surfaced as blocked-surface errors instead of silent empty extraction results.
 
+When all auto fallback tiers complete but still extract zero items, html2rss raises `No RSS feed items extracted after auto fallback ...`.
+
+If failures continue after URL/surface fixes, retry with an explicit browser-based override (`--strategy browserless`), or `--strategy botasaurus` when `BOTASAURUS_SCRAPER_URL` is configured.
+
 ### Browserless Connection / Setup Failures
 
 If you receive `Browserless connection failed (...)`:
@@ -93,7 +93,7 @@ For custom websocket endpoints, `BROWSERLESS_IO_API_TOKEN` is required.
 Common configuration-related errors:
 
 - **`UnsupportedResponseContentType`:** The website returned content that html2rss can't parse (not HTML or JSON).
-- **`UnsupportedStrategy`:** The specified strategy is not available. Use `faraday`, `browserless`, or `botasaurus`.
+- **`UnsupportedStrategy`:** The specified strategy is not available. Use `auto`, `faraday`, `browserless`, or `botasaurus`.
 - **`BOTASAURUS_SCRAPER_URL is required for strategy=botasaurus.`:** Set `BOTASAURUS_SCRAPER_URL` to your Botasaurus scrape API base URL when using `--strategy botasaurus`.
 - **`BOTASAURUS_SCRAPER_URL is invalid`:** Fix the URL format and retry.
 - **`Configuration must include at least 'selectors' or 'auto_source'`:** You need to specify either manual selectors or enable auto-source.
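
Editor's note: for the **Authentication** bullet under Empty Feeds above, a headers-based config sketch might look like this. The header names and values are placeholders, and whether a given site accepts them is an assumption:

```yaml
# Hypothetical fragment: shared top-level headers for an authenticated site.
channel:
  url: https://example.com/private/news
headers:
  Authorization: "Bearer YOUR_TOKEN"   # placeholder token
  Cookie: "session=YOUR_SESSION"       # placeholder cookie
```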
src/content/docs/web-application/how-to/use-automatic-feed-generation.mdx (9 additions & 11 deletions)
@@ -42,7 +42,7 @@ Then restart the stack:
 1. Open your instance at `http://localhost:4000`
 2. Paste a page URL into `Create a feed`
 3. Add a valid access token when prompted
-4. Choose a strategy if needed, then submit
+4. Submit the request
 5. Copy the generated feed URL or open it directly
 
 ## What Success Looks Like
@@ -59,23 +59,21 @@ That is enough to confirm the self-hosted flow is working.
 
 ## Strategy Behavior
 
-- `faraday` is the default strategy and should be your first try for most pages.
-  - During the feed-creation API request (`POST /api/v1/feeds`) from the web UI, a `faraday` submission may be retried once with `browserless` when the first failure looks retryable.
-  - If that fallback attempt fails, or if the first failure is clearly auth/URL/unsupported-strategy related, the UI stops and shows an error.
-  - This retry behavior is scoped to feed creation. It is not a general retry layer for later feed rendering (`GET /api/v1/feeds/:token`) or preview loading.
+- Feed creation uses the backend default strategy behavior.
+- If feed creation fails, the UI surfaces structured retry/error guidance rather than exposing low-level strategy controls.
 
 ## Input URL Guidance (Quality First)
 
 Automatic generation is most successful when the input URL is already a listing/update surface.
 
 - Higher-success inputs:
-  - newsroom/press listing pages
-  - category/tag/archive/listing pages
-  - changelog/release/update pages
+  - newsroom/press listing pages
+  - category/tag/archive/listing pages
+  - changelog/release/update pages
 - Lower-success inputs:
-  - generic homepages
-  - search pages
-  - app-shell entrypoints (client-rendered shells)
+  - generic homepages
+  - search pages
+  - app-shell entrypoints (client-rendered shells)
 
 If output quality is poor, switch the input to a direct listing/update URL before assuming the feature is broken.