User-Agents Database

User Agents

User Agent - Date Added
Fluffy the spider - 8/02/2004 15:44:40
Forest Conservation Spider - 8/02/2004 15:45:28
Fouineur - 11/02/2004 23:14:57
This robot automatically builds a database that is used by our own search engine. It auto-detects the language (French, English, or Spanish) used in the HTML page. Each database record generated by this robot includes: date, URL, title, total words, size, and de-HTMLized text. It also supports server-side and client-side image maps.
Francis - 8/02/2004 15:46:00
Freecrawl - 11/02/2004 23:18:19
The Freecrawl robot is used to build a database for the EuroSeek service.
FreeFind - 8/02/2004 15:47:08
Lets your visitors search your website.
FunnelWeb - 11/02/2004 23:19:50
Its purpose is to generate a Resource Discovery database and statistics. A localised South Pacific discovery and search engine, with distributed operation under development.
FusionBot - 8/02/2004 20:03:26
Galaxy uses LOGIKA Corporation's FusionBot web-indexing crawler. It collects documents from the web to build a searchable index for the Galaxy Directory, Web Search, and Vertical Search Engines. Visit our FusionBot website to see some of FusionBot's other featured customers (www.fusionbot.com).
Gaisbot - 8/02/2004 15:48:47
Gaisbot is the agent software of GAIS, which crawls web sites all over the world in order to build a search engine like Google or AltaVista.
GalaxyBot - 8/02/2004 15:50:28
gazz - 12/02/2004 20:41:32
This robot is used for research purposes (NTT Cyberspace Laboratories). Its roots are in the TITAN project at NTT.
GeonaBot - 8/02/2004 15:51:29
GetBot - 12/02/2004 20:44:16
GetBot's purpose is to index all the sites it can find that contain Shockwave movies. It is the first bot or spider written in Shockwave. The bot was originally written at Macromedia on a hungover Sunday as a proof of concept. - Alex Zavatone 3/29/96
Gigabot - 8/02/2004 15:52:44
Gigabot is the name of Gigablast's indexing agent, also known as a spider. Gigabot is like a thousand internet users busily surfing the web, but it moves from page to page indexing the content it finds.

Note: User agent changed to Gigabot/2.0 (http://www.gigablast.com/spider.html) on 10/10/2006 after I mailed them to say that their user agent was not RFC compliant. In 2007 they changed to Gigabot/2.0att which was again not RFC compliant.
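As a rough illustration of what syntactic RFC compliance means for a User-Agent string, the sketch below checks a string against the product/comment grammar of RFC 7231 (section 5.5.3). It is a simplification (nested comments are ignored) and is only an assumed model of the complaint above, not how this site or Gigablast actually validates anything.

```python
import re

# Simplified sketch of RFC 7231's User-Agent grammar:
#   User-Agent = product *( RWS ( product / comment ) )
#   product    = token ["/" token]
#   comment    = "(" *( ctext / quoted-pair ) ")"   (nesting ignored here)
TOKEN = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"
PRODUCT = rf"{TOKEN}(?:/{TOKEN})?"
COMMENT = r"\((?:[^()\\]|\\.)*\)"
UA_RE = re.compile(rf"{PRODUCT}(?:\s+(?:{PRODUCT}|{COMMENT}))*")

def looks_rfc_compliant(ua: str) -> bool:
    """True if the whole string parses as products and comments."""
    return UA_RE.fullmatch(ua) is not None
```

Under this grammar, `Gigabot/2.0 (http://www.gigablast.com/spider.html)` parses as a product token followed by a comment.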
Girafabot - 8/02/2004 15:54:09
Girafa is a free web navigation service that works alongside your browser, providing visualization capabilities when searching and navigating the web.
goliatspider - 8/02/2004 19:43:11
Googlebot - 8/02/2004 15:59:50
Google's web-crawling robot
Googlebot-Image - 8/04/2004 22:35:33
Griffon - 12/02/2004 20:54:23
The Griffon robot is used to build the database for the OCN navi search service operated by NTT Communications Corporation. It mainly gathers pages written in Japanese. Its roots are in the TITAN project at NTT.
Gromit - 12/02/2004 20:59:32
Gromit is a targeted web spider that indexes legal sites contained in the AustLII legal links database.
GrubNG - 3/11/2009 23:07:23
Gulper Bot - 13/02/2004 0:54:21
The Gulper Bot is used to collect data for the Yuntis research search engine project.
GurujiBot - 2/08/2007 23:10:48
GurujiBot is the user-agent for Guruji's web crawler. Guruji is crawling the web to build a Next Generation Search Engine.
HamBot - 13/02/2004 0:56:27
Two HamBot robots are used (stand-alone and browser-based) to aid in building the database for HamRad Search - The Search Engine for Search Engines. The robots are run intermittently and perform nearly identical functions.
Harvest - 13/02/2004 0:58:49
Harvest's motivation is to index community- or topic-specific collections, rather than to locate and index all HTML objects that can be found. Harvest also allows users to control the enumeration in several ways, including stop lists and depth and count limits. It therefore provides a much more controlled way of indexing the Web than is typical of robots. Pauses 1 second between requests (by default).
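Harvest's default 1-second pause between requests is the classic crawler politeness pattern. A minimal, hypothetical sketch of that idea (the class and names are illustrative, not Harvest's actual code):

```python
import time

class RateLimiter:
    """Enforce a minimum interval between consecutive requests
    (illustrative only; Harvest's real scheduler is more elaborate)."""

    def __init__(self, delay=1.0):
        self.delay = delay      # seconds between requests
        self._last = None       # monotonic time of the last request

    def wait(self):
        # Sleep only for whatever remains of the delay window.
        if self._last is not None:
            remaining = self.delay - (time.monotonic() - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

A crawler would call `wait()` immediately before each fetch; the first call returns at once, and every later call blocks until at least `delay` seconds have passed since the previous one.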
HelpSpy - 8/02/2004 16:04:35
HenryTheMiragoRobot - 5/09/2005 15:32:34
HI (HTML Index) Search - 14/02/2004 0:17:33
Its purpose is to generate a Resource Discovery database. This robot traverses the net and creates a searchable database of Web pages. It stores the title string of the HTML document and the absolute URL. A search engine provides Boolean AND and OR query models, with or without filtering against a stop list of words. A feature lets Web page owners add their URL to the searchable database.
Homerbot - 8/02/2004 16:06:31
Hometown Spider Pro - 14/02/2004 0:19:12
The Hometown Spider Pro is used to maintain the indexes for Hometown Singles.
HonesoSearchEngine - 23/10/2017 13:38:59
I Robot - 8/03/2004 0:17:23
I Robot is used to build a fresh database for the emulation community. Its primary focus is information on emulation, especially old arcade machines. Primarily English sites will be indexed, and only if they have their own domain. Sites are added manually, based on submissions, after they have been evaluated.
iajaBot - 14/02/2004 0:28:20
Finds adult content
IceCat - 13/12/2009 13:13:18
icsbot - 8/02/2004 19:49:11
iltrovatore-setaccio - 21/02/2004 2:34:04
image.kapsi.net - 8/03/2004 0:50:22
The image.kapsi.net robot is used to build the database for the image.kapsi.net search service. The robot currently runs at random times.
The robot was built for image.kapsi.net's database in 2001.
infomine.ucr.edu - 8/02/2004 19:49:47
Scholarly Internet Resource Collections
InfoSeek Robot - 8/03/2004 0:08:00
Its purpose is to generate a Resource Discovery database. Collects WWW pages for both InfoSeek's free WWW search and commercial search. Uses a unique proprietary algorithm to identify the most popular and interesting WWW pages. Very fast, but never has more than one request per site outstanding at any given time. Has been refined for more than a year.
Infoseek Sidewinder - 10/02/2004 1:26:05
Collects WWW pages for InfoSeek's free WWW search services. Uses a unique, incremental, very fast proprietary algorithm to find WWW pages.
InfoTigerBot - 16/06/2021 11:36:07
Ingrid - 7/03/2004 23:52:52
Commercial, part of a search engine package.
Internet Cruiser Robot - 11/02/2004 16:41:14
Internet Cruiser Robot is Internet Cruiser's prime index agent.
Internet Ninja - 8/09/2006 0:52:44
Dream Train Internet
Iron33 - 8/03/2004 0:30:03
The robot "Iron33" is used to build the database for the WWW search engine "Verno".
Israeli-search - 8/03/2004 0:31:37
Complete software designed to collect information with a distributed workload, supporting context queries. Intended to be a complete, updated resource for Israeli sites and information related to Israel or Israeli society.
Jabse.com Crawler - 10/03/2014 9:31:26
Jayde crawler - 20/12/2012 16:17:07
B2B Search Engine
JCrawler - 8/03/2004 0:37:42
JCrawler is currently used to build the Vietnam topic-specific WWW index for VietGATE. It schedules visits randomly, but will not visit a site more than once every two minutes. It uses a subject-matter relevance pruning algorithm to determine which pages to crawl and index, and will not generally index pages with no Vietnam-related content. Uses Unicode internally, and detects and converts several different Vietnamese character encodings.
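JCrawler's "no more than once every two minutes" rule is easy to picture as a per-host timestamp table. A hypothetical sketch (the class and method names are illustrative, not JCrawler's code):

```python
import time

class HostScheduler:
    """Allow a host to be visited at most once per `min_interval` seconds
    (illustrative sketch of JCrawler-style per-site throttling)."""

    def __init__(self, min_interval=120.0):
        self.min_interval = min_interval
        self._last_visit = {}  # host -> timestamp of last allowed visit

    def may_visit(self, host, now=None):
        """Return True and record the visit if the host is due again."""
        now = time.monotonic() if now is None else now
        last = self._last_visit.get(host)
        if last is not None and now - last < self.min_interval:
            return False  # visited too recently; skip for now
        self._last_visit[host] = now
        return True
```

A crawl loop would call `may_visit(host)` before fetching and requeue the URL when it returns False, which spreads load across sites instead of hammering one.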
Jobot - 8/03/2004 0:43:30
Its purpose is to generate a Resource Discovery database. Intended to seek out sites of potential "career interest". Hence - Job Robot.
