HTTP

From Citizendium
Revision as of 15:20, 12 June 2011


HTTP (the Hypertext Transfer Protocol) is a client-server protocol and the most prevalent messaging standard of the World Wide Web. An HTTP client program (typically a web browser) sends an ASCII-text HTTP request message to an HTTP server program (a web server), and the web server sends back an HTTP response in reply. The headers of requests and responses are plain text, but the payload (message body) can carry binary data such as image files; headers such as Content-Type and Content-Encoding describe the body so that the receiver can interpret it correctly. The same HTTP protocol used by web browsers is also used by search-engine crawlers to index the World Wide Web, as well as by so-called spam-bots which scrape web pages to obtain information for malicious purposes.

Two versions of HTTP are still in common use today, although most web servers implement the latter for efficiency reasons:

  • IETF RFC 1945, Hypertext Transfer Protocol -- HTTP/1.0[1], which specifies how the browser and server communicate with each other
  • IETF RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1[2], which adds caching, persistent (keep-alive) connections and virtual hosts to the original specification

HTTP is one of several well-known applications which ride on top of the internet's Transmission Control Protocol, and HTTP is assigned the well-known TCP port number 80. This is important, though perhaps not obvious, because browsing to a URL such as http://diatom.ansp.org/ is actually equivalent to browsing to http://diatom.ansp.org:80/. If two web server programs were to execute simultaneously on a single host computer, one of them would have to be configured to use an alternative port number (well-known ports are those below 1024; 8080 is a common alternative for HTTP). A URL such as http://diatom.ansp.org:8080/ could then be used to address an HTTP request to the second web server on a computer whose DNS name is diatom.ansp.org, assuming the second HTTP server were configured to listen on TCP port 8080. If two web servers on one computer both attempt to listen on TCP port 80 (the HTTP default), the second will fail to start, since only one process at a time can listen on a given port.
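The default-port equivalence described above can be illustrated with a short sketch, using only the Python standard library (the function name effective_port is hypothetical):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url):
    # A URL with no explicit port uses the scheme's well-known default.
    parts = urlsplit(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("http://diatom.ansp.org/"))       # 80
print(effective_port("http://diatom.ansp.org:8080/"))  # 8080
```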

HTTP's original purpose was the transfer of Hypertext Markup Language and other page-description formats such as Cascading Style Sheets (CSS). HTTP is a relatively simple protocol, which relies on the Transmission Control Protocol to ensure its traffic is carried, free from errors, over Internet Protocol networks. It works in the same manner whether the users or servers are connected to the public Internet, an intranet, or an extranet. HTTP must be supplemented (for example, with TLS) to provide security of the message transfer.[3]

The World Wide Web is more than HTML and HTTP alone. It also includes a wide range of administrative techniques and performance-enhancing methods such as web caches and content distribution networks.

History

HTTP was created at CERN by Tim Berners-Lee in 1989 as a way to share hypertext documents.[4] Around 1993, with the availability of the first widely used graphical web browser (Mosaic), HTTP and HTML together began to be used by other sites, primarily in the scientific world. The availability of the Mosaic web browser and the NCSA HTTPd web server, both developed at the National Center for Supercomputing Applications, was key to the explosion in popularity of HTTP and HTML that followed.

The first (1990) version of HTTP, called HTTP/0.9, was a bare-bones protocol for raw data transfer across the Internet. HTTP/1.0 (1996) improved the protocol by allowing messages to carry metadata about the transferred content (such as its type, length and location), along with other directives on how to handle the request and response.

However, HTTP/1.0 was still limited, lacking support for [[proxy (computer)|proxies]], web caches, persistent connections for repeated messaging, and virtual web servers. To meet those needs, HTTP/1.1 was developed.

Technical details

The HTTP protocol follows a client-server model, where the client issues a request for a resource to the server. Requests and responses consist of several headers and, optionally, a body. Resources are identified using a URI (Uniform Resource Identifier).

Example conversation

The following is a typical client-server conversation of the kind which is carried out every time a page is loaded in a web browser. In this case, the user has entered the address 'http://www.lth.se/' in the browser's address bar. The browser makes a 'request' to port 80 on the server www.lth.se, and the server responds with the home page. Comments are in italics and are not part of the actual conversation.

Request:

GET / HTTP/1.1                          Please transmit your root page, using HTTP 1.1,
Host: www.lth.se                        located at www.lth.se.
User-Agent: Mozilla/5.0 (Linux i686)    I am Mozilla 5.0, running on i686 Linux.
Accept: text/html                       I understand HTML-coded documents.
Accept-Language: sv, en-gb              I prefer pages in Swedish and British English.
Accept-Encoding: gzip, deflate          You may compress your content as gzip or deflate, if you wish.
Accept-Charset: utf-8, ISO-8859-1       I understand text encoded in Unicode and Latin-1.
                                        (Empty line signifies end of request)

Response:

HTTP/1.1 200 OK                         Request is valid; I am complying according to HTTP 1.1 (code 200).
Date: Wed, 26 May 2010 10:33:59 GMT     The time (in GMT) is 10:33 [...].
Server: Apache/2.2.3 (Red Hat)          I am Apache 2.2.3, running on Red Hat Linux.
Content-Length: 54283                   The content you requested is 54,283 bytes long.
Content-Type: text/html; charset=utf-8  Prepare to receive text encoded in Unicode, to be interpreted as HTML.
<html>                                  (The webpage www.lth.se/ follows)
  <head>
    <title>Some page title</title>
...

These are a few additional example responses that could also occur:


Response:

HTTP/1.1 400 Bad Request                Request does not conform to HTTP/1.1, and I did not understand it.
                                        Please do not repeat the request in its current form.

Response:

HTTP/1.1 404 Not Found                  Request conforms with HTTP/1.1, but the resource requested was not found here.
                                        Please do not repeat the request for this resource.

Response:

HTTP/1.1 403 Forbidden                  Request is conformant and valid, but I refuse to comply under HTTP/1.1.
                                        Please do not repeat the request.
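The exchanges above can also be sketched programmatically. The following is a minimal illustration, using only the Python standard library, that builds a request like the one shown and parses a response status line; no network traffic is involved, and the helper names are hypothetical:

```python
def build_request(host, path="/"):
    # Assemble a minimal HTTP/1.1 GET request; header lines are
    # separated by CRLF, and a blank line terminates the request.
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Accept: text/html",
        "",
        "",
    ]
    return "\r\n".join(lines)

def parse_status_line(line):
    # Split "HTTP/1.1 200 OK" into version, numeric code, reason phrase.
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

print(build_request("www.lth.se").splitlines()[0])    # GET / HTTP/1.1
print(parse_status_line("HTTP/1.1 404 Not Found"))
```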

Request methods

HTTP clients can use one of eight request methods:

  • HEAD
  • GET
  • POST
  • PUT
  • DELETE
  • TRACE
  • OPTIONS
  • CONNECT

In practice, it is mainly the GET, POST and HEAD methods that are used in web applications, although protocols like WebDAV make use of others.

Status codes

Server responses include a status header, which informs the client whether the request succeeded. The status header is made up of a "status code" and a "reason phrase" (descriptive text).

Status code classes

Status codes are grouped into classes:

  • 1xx (informational) : Request received, continuing process
  • 2xx (success) : The action was successfully received, understood, and accepted
  • 3xx (redirect) : Further action must be taken in order to complete the request
  • 4xx (client error) : The request contains bad syntax or cannot be fulfilled
  • 5xx (server error) : The server failed to fulfill an apparently valid request.

For example, if the client requests a non-existent document, the status code will be "404 Not Found".

According to RFC 2616:

HTTP applications are not required to understand the meaning of all registered status codes, though such understanding is obviously desirable. However, applications MUST understand the class of any status code, as indicated by the first digit, and treat any unrecognized response as being equivalent to the x00 status code of that class, with the exception that an unrecognized response MUST NOT be cached.
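The rule quoted above can be sketched in code. The following is a simplified illustration (the function names and the small set of recognized codes are hypothetical): the first digit determines the class, and an unrecognized code is treated as the x00 code of its class:

```python
CLASSES = {
    1: "informational",
    2: "success",
    3: "redirection",
    4: "client error",
    5: "server error",
}

def status_class(code):
    # The class of a status code is indicated by its first digit.
    return CLASSES[code // 100]

KNOWN_CODES = {100, 200, 301, 302, 304, 400, 403, 404, 500}

def effective_code(code):
    # An unrecognized code is treated as the x00 code of its class.
    return code if code in KNOWN_CODES else (code // 100) * 100

print(status_class(404))    # client error
print(effective_code(499))  # 400
```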

All status codes

All the codes are described in RFC 2616.

HTTP header and cache management

The HTTP message header includes a number of fields used to facilitate cache management. One of these, ETag (entity tag), is a string-valued field whose value should (weak entity tag) or must (strong entity tag) change whenever the page (or other resource) is modified. This allows browsers or other clients to determine whether the entire resource needs to be downloaded again. The HEAD method, which returns the same message header that would be included in the response to a GET request, can be used to determine whether a cached copy of the resource is up to date without actually downloading a new copy. Other header fields can be used, for example, to indicate when a copy should expire (no longer be considered valid), or that it should not be cached at all. The latter is useful when data is generated dynamically (for example, a counter of visits to a web site).
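A server-side validation check along these lines can be sketched as follows. This is a simplified illustration only (real servers also handle If-Modified-Since, weak comparison, and lists of ETags): if the client's cached ETag still matches, a 304 Not Modified response with no body suffices.

```python
def respond(request_headers, current_etag, body):
    # If the client's cached ETag matches the resource's current ETag,
    # the cached copy is still valid and no body needs to be sent.
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""
    return 200, body

print(respond({"If-None-Match": '"v1"'}, '"v1"', b"<html>...")[0])  # 304
print(respond({}, '"v1"', b"<html>...")[0])                         # 200
```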

HTTP server operations

Multiple virtual servers may be mapped onto a single physical computer. For effective server use, they must be on networks engineered to handle their traffic; see port scanning for how an Internet Service Provider may check for servers placed where their traffic can create problems.

References

  1. Request for Comments: 1945, Hypertext Transfer Protocol -- HTTP/1.0. IETF Network Working Group (May 1996). Retrieved on 2007-04-02.
  2. Request for Comments: 2616, Hypertext Transfer Protocol -- HTTP/1.1. IETF Network Working Group (June 1999). Retrieved on 2011-06-12.
  3. Rescorla, E. (May 2000), HTTP Over TLS, Internet Engineering Task Force, RFC2818
  4. Berners-Lee, Tim (March 1989), Tim Berners-Lee's proposal: "Information Management: a Proposal"