`README.md` (19 additions, 0 deletions)
````diff
@@ -21,6 +21,25 @@ A Model Context Protocol (MCP) server implementation that integrates with [Firec
 - Automatic retries and rate limiting
 - Cloud and self-hosted support
 - SSE support
+- **Context limit support for MCP compatibility**
+
+## Context Limiting for MCP
+
+All tools now support the `maxResponseSize` parameter to limit response size for better MCP compatibility. This is especially useful for large responses that may exceed MCP context limits.
+
+**Example Usage:**
+```json
+{
+  "name": "firecrawl_scrape",
+  "arguments": {
+    "url": "https://example.com",
+    "formats": ["markdown"],
+    "maxResponseSize": 50000
+  }
+}
+```
+
+When the response exceeds the specified limit, content will be truncated with a clear message indicating truncation occurred. This parameter is optional and preserves full backward compatibility.
 
 > Play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers).
````
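The PR excerpt does not show the server-side change itself, but the behavior the README describes (an optional parameter, oversized content truncated, a clear notice appended) could be implemented along these lines. This is a minimal sketch: the function name `truncateResponse` and the notice wording are hypothetical, not taken from the actual code, and `maxResponseSize` is assumed to be a character count as the tool descriptions suggest.

```typescript
// Hypothetical sketch of the truncation behavior described in the README above;
// the real firecrawl-mcp implementation may differ.
function truncateResponse(content: string, maxResponseSize?: number): string {
  // Optional parameter: when omitted, return the content unchanged,
  // which is what keeps the change backward compatible.
  if (maxResponseSize === undefined || content.length <= maxResponseSize) {
    return content;
  }
  const notice = `\n\n[Truncated: response exceeded maxResponseSize of ${maxResponseSize} characters]`;
  // Reserve room for the notice so the combined output stays within the limit.
  const cut = Math.max(0, maxResponseSize - notice.length);
  return content.slice(0, cut) + notice;
}
```

Counting characters rather than tokens keeps a limit like this cheap to enforce, at the cost of only approximating a model's actual context window.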
`package.json` (1 addition, 1 deletion)
````diff
@@ -1,6 +1,6 @@
 {
   "name": "firecrawl-mcp",
-  "version": "3.3.4",
+  "version": "3.3.5",
   "description": "MCP server for Firecrawl web scraping integration. Supports both cloud and self-hosted instances. Features include web scraping, search, batch processing, structured data extraction, and LLM-powered content analysis.",
````
Tool description updates for `firecrawl_scrape`:

````diff
@@ ... @@
+Scrape content from a single URL with advanced options.
 This is the most powerful, fastest and most reliable scraper tool, if available you should always default to using this tool for any web scraping needs.
 
 **Best for:** Single page content extraction, when you know exactly which page contains the information.
@@ -248,11 +256,13 @@ This is the most powerful, fastest and most reliable scraper tool, if available
   "arguments": {
     "url": "https://example.com",
     "formats": ["markdown"],
-    "maxAge": 172800000
+    "maxAge": 172800000,
+    "maxResponseSize": 50000
   }
 }
 \`\`\`
 **Performance:** Add maxAge parameter for 500% faster scrapes using cached data.
+**Context Limiting:** Use maxResponseSize parameter to limit response size for MCP compatibility (e.g., 50000 characters).
 **Returns:** Markdown, HTML, or other formats as specified.
 ${SAFE_MODE ? '**Safe Mode:** Read-only content extraction. Interactive actions (click, write, executeJavascript) are disabled for security.' : ''}
````
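For a sense of how the new parameter reaches the tool, here is a rough client-side call. It assumes the official `@modelcontextprotocol/sdk` client and a server started with `npx -y firecrawl-mcp` with `FIRECRAWL_API_KEY` set in the environment; none of this client code is part of the PR, and the connection setup may differ in your environment.

```typescript
// Sketch: passing the new maxResponseSize parameter through an MCP tool call.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server over stdio (assumes FIRECRAWL_API_KEY is exported).
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "firecrawl-mcp"],
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Mirrors the usage example in the scrape tool description above.
  const result = await client.callTool({
    name: "firecrawl_scrape",
    arguments: {
      url: "https://example.com",
      formats: ["markdown"],
      maxAge: 172800000, // accept cached data up to 48 hours old
      maxResponseSize: 50000, // cap the response at 50000 characters
    },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```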
And the matching update to the `firecrawl_map` description:

````diff
@@ -278,13 +288,15 @@ Map a website to discover all indexed URLs on the site.
 **Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website.
 **Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping).
 **Common mistakes:** Using crawl to discover URLs instead of map.
+**Context Limiting:** Use maxResponseSize parameter to limit response size for MCP compatibility.
 **Prompt Example:** "List all URLs on example.com."
 **Usage Example:**
 \`\`\`json
 {
   "name": "firecrawl_map",
   "arguments": {
-    "url": "https://example.com"
+    "url": "https://example.com",
+    "maxResponseSize": 50000
   }
 }
 \`\`\`
@@ -297,17 +309,18 @@ Map a website to discover all indexed URLs on the site.
````