A small program, usually written in Java, which runs in a web browser as part of a web page. The use of such a program may cause spiders and robots to stop indexing a page.
Common Gateway Interface - a standard interface between web server software and other programs running on the same machine.
Strictly, any program which handles its input and output data according to the CGI standard. In practice, CGI programs are used to handle forms and database queries on web pages, and to produce non-static web page content.
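As an illustrative sketch (not part of the glossary), a minimal CGI program in Python might look like the following. Per the CGI standard, form data from a GET request arrives in the QUERY_STRING environment variable, and the program writes an HTTP header block, a blank line, and the response body to standard output. The `name` parameter is a made-up example.

```python
import os
import sys
from urllib.parse import parse_qs

def handle_request(environ=os.environ):
    # The CGI standard passes GET form data in the QUERY_STRING
    # environment variable; parse it into a dict of lists.
    query = parse_qs(environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]

    # A CGI program emits a header block, a blank line, then the
    # body, all on standard output; the web server relays it to
    # the browser.
    body = f"<html><body><p>Hello, {name}!</p></body></html>"
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    sys.stdout.write(handle_request())
```

A web server configured for CGI would run this script once per request; the same pattern handles forms and database queries.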
A computer, program or process which makes requests for information from another computer, program or process. Web browsers are client programs. Search engine spiders are (or can be said to behave as) clients.
The process of clicking on a link in a search engine results page to visit an indexed site. This is an important step in receiving visitors to a site via search engines: good ranking may be useless if visitors do not click on the link which leads to the indexed site. The secret here is to provide a good, descriptive title and an accurate, interesting description.
The typography, composition, content, and links of a website; what the user sees and interacts with on their monitor; the look and feel of a website. The sum of all elements of a website is generally considered its content.
An internet link which doesn't lead to a page or site, probably because the server is down or the page has moved or no longer exists. Most search engines have techniques for removing such pages from their listings automatically, but as the internet continues to increase in size, it becomes more and more difficult for a search engine to check all the pages in the index regularly. Reporting of dead links helps to keep the indexes clean and accurate, and this can usually be done by submitting the dead link to the search engine.
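A dead-link checker could be sketched in Python as below. The status-code thresholds are an illustrative assumption, not a standard: 404 and 410 responses mean the page is gone, and 5xx responses suggest the server is down.

```python
def looks_dead(status_code):
    """Classify an HTTP status code as a likely dead link.

    The choice of codes here is an assumption for illustration:
    404 (Not Found) and 410 (Gone) indicate a missing page, and
    any 5xx response indicates a server problem.
    """
    if status_code in (404, 410):
        return True
    if 500 <= status_code < 600:
        return True
    return False
```

A real checker would fetch each indexed URL (e.g. with `urllib.request`) and apply a test like this to the response, retrying server errors before removing a page from the index.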
Descriptive text associated with a web page and displayed, usually with the page title and URL, when the page appears in a list of pages generated by a search engine or directory as a result of a query. Some search engines take this description from the DESCRIPTION Meta tag - others generate their own from the text in the page. Directories often use text provided at registration.
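As a sketch of how a search engine that honours the DESCRIPTION meta tag might read it, the following Python snippet uses the standard-library `html.parser` to pull the tag's content from a page. The sample page is invented for illustration.

```python
from html.parser import HTMLParser

class DescriptionFinder(HTMLParser):
    """Collect the content of a <meta name="description"> tag."""

    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names, so we only
        # need to lowercase the attribute *value* for comparison.
        d = dict(attrs)
        if tag == "meta" and d.get("name", "").lower() == "description":
            self.description = d.get("content")

def extract_description(html):
    finder = DescriptionFinder()
    finder.feed(html)
    return finder.description

page = ('<html><head>'
        '<meta name="DESCRIPTION" content="A glossary of web terms.">'
        '</head></html>')
```

An engine that finds no such tag would fall back to generating a description from the page text, as the entry above notes.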
A server or a collection of servers dedicated to indexing internet web pages and returning lists of pages which match particular queries. Directories (also known as Indexes) are normally compiled manually, by user submission and often involve an editorial selection and/or categorization process.
A sub-set of internet addresses. Domains are hierarchical, and lower-level domains often refer to particular web sites within a top-level domain. The most significant part of the address comes at the end - typical top-level domains are .com, .edu, .gov, .org (which sub-divide addresses into areas of use). There are also various geographic top-level domains (e.g. .nz, .ca, .fr, .ro etc.) referring to particular countries.
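The hierarchy reads from right to left, which a short Python sketch (hostname chosen for illustration) can make concrete:

```python
def domain_hierarchy(hostname):
    """Return a hostname's labels from most to least significant.

    The most significant part of an address comes at the end, so
    reversing the dot-separated labels puts the top-level domain
    first.
    """
    labels = hostname.lower().split(".")
    return list(reversed(labels))
```

For `www.example.co.nz` this yields the top-level country domain `.nz` first, then the lower-level domains beneath it.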
Information on web pages which changes or is changed automatically, e.g. based on database content or user information. Sometimes it's possible to spot that this technique is being used, e.g. if the URL ends with .asp, .cfm, .cgi or .shtml. It is possible to serve dynamic content using standard (normally static) .htm or .html type pages, though. Search engines will currently index dynamic content in a similar fashion to static content, although they will not usually index URLs which contain the ? character.
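The spotting technique described above can be sketched as a Python heuristic. It flags a URL as probably dynamic if it carries a query string or ends in one of the extensions the entry mentions; as the entry notes, a static-looking .htm or .html page can still serve dynamic content, so this is only a guess.

```python
from urllib.parse import urlparse

# Extensions named in the entry above as hints of dynamic content.
DYNAMIC_EXTENSIONS = (".asp", ".cfm", ".cgi", ".shtml")

def looks_dynamic(url):
    """Heuristic guess at whether a URL serves dynamic content.

    A ?query (which many search engines will not index) or a
    typical dynamic-page extension both count as hints.
    """
    parsed = urlparse(url)
    if parsed.query:
        return True
    return parsed.path.lower().endswith(DYNAMIC_EXTENSIONS)
```

Note the limits in both directions: `page.html?id=3` is flagged only because of the `?`, and a rewritten URL like `/products/42/` would pass as static even if a database generated it.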