User-Agents Database

User Agents

User Agent | Date Added
Alexa/Internet Archive | 8/02/2004 19:47:49
Alexa.com and archive.org (building the Internet Archive)
Arale | 10/02/2004 20:44:54
A Java multithreaded web spider. Download entire web sites or specific resources from the web. Render dynamic sites to static pages.

Empty user agent
Archive.org | 11/08/2010 13:59:27
ArchiveTeam ArchiveBot | 24/08/2014 10:37:48
Bloodhound | 11/02/2004 0:16:23
Bloodhound will download a whole web site, depending on the number of links to follow specified by the user.

Empty user agent
burglar | 12/09/2006 0:32:09
CCBot | 1/12/2008 1:14:23
Check&Get | 6/02/2004 0:15:29
Check&Get is a handy and powerful bookmark manager and web monitoring program that lets you organize your browser bookmarks, check your favorite Internet pages, and detect whether their content has changed or become unavailable.
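The change detection such a monitor performs can be sketched roughly as follows; this is a generic Python illustration, not Check&Get's actual implementation, and the helper name is hypothetical:

    import hashlib
    import urllib.request

    def check_page(url, previous_hash):
        """Return ('unavailable'|'changed'|'unchanged', new_hash) for a monitored page."""
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                body = response.read()
        except OSError:
            # Network failure or HTTP error: treat the page as unavailable.
            return "unavailable", previous_hash
        new_hash = hashlib.sha256(body).hexdigest()
        if previous_hash is not None and new_hash == previous_hash:
            return "unchanged", new_hash
        return "changed", new_hash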
collect | 12/09/2006 0:33:52
CommonCrawler | 14/12/2015 13:51:45
copier | 12/09/2006 0:34:06
Custo | 6/02/2004 21:45:59
Capable of reading HTML, CSS, JavaScript, and Shockwave Flash, Custo allows you to quickly retrieve information about the structure of a Web site.
CyotekWebCopy | 21/03/2018 7:47:59
DeWeb(c) Katalog/Index | 11/02/2004 16:53:41
Its purpose is to generate a Resource Discovery database, perform mirroring, and generate statistics. Uses a combination of an Informix(tm) database and WN 1.11 server software for indexing/resource discovery, full-text search, and text excerpts.
extract | 12/09/2006 0:34:51
FurlBot | 2/06/2006 0:37:14
Step 1: Sign up and add Furl to your browser
Step 2: Browse the Web and save any page with a single click
Step 3: Retrieve and share your pages easily
GetURL | 12/02/2004 20:46:59
Its purpose is to validate links, perform mirroring, and copy document trees. Designed as a tool for retrieving web pages in batch mode without the encumbrance of a browser. Can be used to describe a set of pages to fetch, and to maintain an archive or mirror. It is not run by a central site and accessed by clients; it is run by the end user or archive maintainer.
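Batch-mode retrieval of a described set of pages, as GetURL does, can be sketched like this (a minimal Python illustration; the URLs and archive directory are placeholders, and this is not GetURL's own code):

    import pathlib
    import urllib.parse
    import urllib.request

    PAGES = [
        "https://example.com/",        # placeholder set of pages to fetch
        "https://example.com/about",
    ]

    def mirror(pages, dest="archive"):
        """Fetch each page and store it under dest, named by its quoted URL."""
        root = pathlib.Path(dest)
        root.mkdir(exist_ok=True)
        for url in pages:
            filename = urllib.parse.quote(url, safe="")
            with urllib.request.urlopen(url, timeout=10) as response:
                (root / filename).write_bytes(response.read())

    mirror(PAGES)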
GrabNet | 7/09/2006 23:30:04
Grab snips of information from the World Wide Web (including images, text, and URLs) to help you reuse and organize sites within a customized collection of folders on your desktop.
Heritrix | 8/02/2004 0:05:08
Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.
HP Web PrintSmart | 21/02/2007 22:35:50
Utility from HP to capture and print web pages.
Note: fell for a bad-bot trap (guestbook trap).
HTMLgobble | 14/02/2004 0:25:26
A mirroring robot. Configured to stay within a directory, it sleeps between requests, and the next version will use HEAD to check whether the entire document needs to be retrieved.
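The HEAD technique planned for the next version would work roughly like this sketch (generic Python, assuming the robot stores the Last-Modified value it last saw; not HTMLgobble's code):

    import time
    import urllib.request

    def needs_refetch(url, last_modified_seen):
        """HEAD the URL and compare Last-Modified with the stored value."""
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            current = response.headers.get("Last-Modified")
        # Refetch when the server gives no Last-Modified or the value differs.
        return current is None or current != last_modified_seen

    # Politeness, as in the description: sleep between requests.
    time.sleep(2)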
HTTrack | 6/02/2004 21:51:04
HTTrack is a free (GPL, libre/open source) and easy-to-use offline browser utility.
IBM_Planetwide | 7/03/2004 23:48:26
Restricted to IBM-owned or related domains.
Internet Explorer DigExt | 1/06/2006 23:20:56
This is Internet Explorer's "Make Available Offline" feature, also known as subscriptions.
iSiloX | 5/11/2005 23:55:37
iSiloX is the desktop application that converts content to the iSilo™ 3.x/4.x document format, enabling you to carry that content on your Palm OS® PDA, Pocket PC PDA, Windows® CE Handheld PC, or Windows® computer for viewing using iSilo™. It is currently available for Windows® and Mac OS X. The X in the name iSiloX represents the "transformation" of content functionality provided by iSiloX.
JBot Java Web Robot | 8/03/2004 0:36:08
Java web crawler to download web sites.
User agent can be changed by user.
JoBo Java Web Robot | 8/03/2004 0:40:18
JoBo is a web site download tool. The core web spider can be used for any purpose.
User agent can be changed by user.
JOC Web Spider | 6/02/2004 21:57:49
Download websites to your HD and navigate offline!
JoeBot | 8/03/2004 0:44:38
JoeBot is a generic web crawler implemented as a collection of Java classes which can be used in a variety of applications, including resource discovery, link validation, mirroring, etc. It currently limits itself to one visit per host per minute.
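The one-visit-per-host-per-minute policy amounts to per-host rate limiting; a minimal Python sketch of the idea (not JoeBot's actual Java classes):

    import time
    from urllib.parse import urlparse

    last_visit = {}  # host -> timestamp of the most recent request

    def wait_for_host(url, min_interval=60.0):
        """Block until min_interval seconds have passed since the last visit to this host."""
        host = urlparse(url).netloc
        wait = last_visit.get(host, 0.0) + min_interval - time.time()
        if wait > 0:
            time.sleep(wait)
        last_visit[host] = time.time()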
JPluck | 6/02/2004 21:34:04
JPluck converts web sites and RSS feeds to Plucker documents for offline reading on your handheld.
JRTS Check Favorites | 7/02/2004 14:04:27
Check Favorites is a full-featured solution for maintaining all of the Internet-based links in your Favorites list (bookmarks). Check Favorites can check multiple links simultaneously and can optionally remove all of the broken links for you. It also supports the ability to export your Favorites in a variety of formats, as well as the ability to extract, check, and export the links contained in any HTML page on your system or accessible via the Internet (thus acting as a kind of link checker and ripper).
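Checking multiple links simultaneously, as described, can be sketched with a thread pool (an illustrative Python snippet, not Check Favorites' implementation):

    import concurrent.futures
    import urllib.request

    def is_alive(url):
        """HEAD the URL; treat network errors and HTTP errors as broken."""
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=10) as response:
                return response.status < 400
        except OSError:
            return False

    def find_broken(bookmarks):
        """Check all bookmarks concurrently and return the broken ones."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            alive = list(pool.map(is_alive, bookmarks))
        return [url for url, ok in zip(bookmarks, alive) if not ok]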
Kongulo | 11/10/2006 0:54:49
A simple web spider that lets you keep copies of web sites in your Google Desktop Search index.
Kontiki Client | 6/02/2004 22:00:30
memorybot | 20/05/2014 13:40:21
Mirror Checking | 27/09/2005 11:49:46
MixnodeCache | 23/03/2019 13:44:09
We create a copy of the web so that bots and crawlers come to us and not your website, dramatically reducing your non-human traffic and hosting costs.
Monster | 25/07/2005 0:46:52
The Monster has two parts: a Web searcher and a Web analyzer. The searcher builds a list of the WWW sites in a desired domain (for example, it can list all WWW sites in the mit.edu, com, or org domains). In the User-agent field, $TYPE is set to 'Mapper' for the Web searcher and 'StAlone' for the Web analyzer.
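Filling in such a parameterized User-agent field looks roughly like the sketch below; the exact Monster user-agent format is not given above, so the string shown is an assumption:

    import urllib.request

    def fetch_as(url, type_field):
        """Fetch a URL with $TYPE substituted into the User-agent field."""
        # type_field is 'Mapper' (Web searcher) or 'StAlone' (Web analyzer).
        headers = {"User-Agent": f"Monster ({type_field})"}  # assumed format
        request = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.read()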
Mozilla/5.0 | 15/06/2006 22:33:17
Very aggressive bot (5+ requests/sec); fell for a bad-bot trap when copying a client site.
Second encounter: went directly for a URL with "guestbook" in it.
MSIECrawler | 6/02/2004 0:33:24
To provide users with the best browsing experience, Microsoft® Internet Explorer 4.0 introduced offline browsing to the Microsoft Win32 platform. Internet Explorer 5 extends offline browsing, supporting "smarter" offline Favorites.
NavRoad | 8/09/2006 12:08:53
NavRoad HTML Viewer is a small, fast, powerful off-line HTML browser designed for viewing HTML and web image files (GIF, JPG, PNG, BMP) anytime, anywhere.
NearSite | 8/09/2006 12:11:49
You can get more out of your Internet connection with NearSite. Keep your favourite Web pages and sites close at hand and up to date with Autobrowse: NearSite can automatically collect your Web browsing while you get on with other tasks, ready for you to browse offline whenever you wish, wherever you are.
NetCarta WebMap Engine | 25/07/2005 0:59:30
The NetCarta WebMap Engine is a general-purpose, commercial spider. Packaged with a full GUI in the CyberPilot Pro product, it acts as a personal spider that works with a browser to facilitate context-based navigation. The WebMapper product uses the robot to manage a site (site copy, site diff, and extensive link management facilities). All versions can create publishable NetCarta WebMaps, which capture the crawled information. If the robot sees a published map, it will return the published map rather than continuing its crawl. Since this is a personal spider, it will be launched from multiple domains. This robot tends to focus on a particular site. No instance of the robot should have more than one outstanding request out to any given site at a time. The User-agent field contains a coded ID identifying the instance of the spider; specific users can be blocked via robots.txt using this ID.
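Blocking one instance via robots.txt, as the last sentence suggests, would look like the snippet below; the coded ID shown is hypothetical:

    # robots.txt at the site root; 'NetCarta-0042' stands in for an instance's coded ID
    User-agent: NetCarta-0042
    Disallow: /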
NetSpider | 8/09/2006 12:18:39
The primary objective of NetSpider is to extract and display all the links and local references from a selected page and to allow the user to download them. All the extracted links to other pages can be processed further in the same manner. The program supports resume and has a facility for sites requiring user names and passwords. It can accept pages for processing and files for downloading from the clipboard.
Offline Explorer | 6/02/2004 22:11:42
Download Web sites to your hard disk for offline browsing.
Offline Navigator | 7/09/2006 23:40:39
Pack Rat | 25/07/2005 1:13:56
Used for local maintenance and for gathering web pages so that local statistical info can be used in artificial intelligence programs. Funded by NEMOnline.
pavuk | 3/05/2006 0:43:52
Pavuk is a multifunctional open-source web grabber with slow but continuous development. This page reports important news regarding pavuk (usually new releases).

Pavuk is a UNIX program used to mirror the contents of WWW documents or files. It transfers documents from HTTP, FTP, Gopher and optionally from HTTPS (HTTP over SSL) servers. Pavuk has an optional GUI based on the GTK2 widget set.
pcBrowser | 9/09/2006 0:10:59
pcBrowser is offline browsing at its finest, especially since it recognizes more than 40 fully tested filetypes!

With slideshow capability - integrated with a Windows Explorer appeal - pcBrowser is a primo program as an all-around multimedia player/image viewer, taking offline browsing a step higher.
PostFavorites | 3/05/2006 1:26:28
Yahoo Search My Web
- Save what you like to build your own personal web
- "Re-find" pages instantly when you need them again
- Share your personal web
- Better than bookmarks
puf | 6/02/2004 22:43:32
puf is a download tool for UNIX-like systems. You may use it to download single files or to mirror entire servers.
