IEEE 802.3

IEEE 802.3 is a working group of the IEEE, which produces technical standards in the technology family generically called Ethernet. Originally, the scope of 802.3 was limited to local area networks using carrier sense multiple access with collision detection (CSMA/CD) medium access control, but some later projects retain elements such as the original frame format while operating on media where CSMA/CD is irrelevant.

In the process of standardization, some improvements, generally backward compatible with DIX Ethernet, were made to the specification. The 802.3 committee has remained active, building on the original Ethernet work and creating literally dozens of standards for communications systems undreamed-of by the original inventors. What is popularly called "Wireless Ethernet" actually comes from IEEE 802.11, and the "WiMax" variant from IEEE 802.16.

DIX Ethernet

The original 802.3 specifications defined a physical medium and the means of connecting to it, as well as a medium access control method (i.e., a subset of the data link protocol). There were variants:

  • 10BASE5: ran on semirigid coaxial cable, up to 500 meters long, using "baseband" signaling of direct-current pulses
  • 10BASE2: ran on a thinner, more flexible coaxial cable, 185 to 200 meters long, again using baseband signaling

Additional variants operated in a "broadband" mode, as a modulated radio-frequency signal on cable television distribution systems, or with optical signaling on optical cable.

All of these variants had a data rate of 10 megabits per second (Mbps).

Physical aspects

10BASE5

Originally, the medium was a specified coaxial cable with a maximum length of 500 meters. A 50-ohm resistive terminator was connected at each end.

This main cable was semirigid and could not be bent sharply enough to connect directly to the computers. To allow the necessary flexibility in computer connection, there were two means of making the actual cable connection:

  • T-connector, where the cable was cut at the desired point of attachment, a connector was placed on each of the cut ends, and the two cut ends plus a drop cable were joined by a T-shaped connector that gave a common path to the center and shield conductors of all three cables. Inserting the T-connector and making its three connections restored cable operation.
  • Vampire tap, in which the cable was placed in one half of a pair of mechanical connectors that had a half-cylinder groove to accept the cable. The other half was put over the cable, encircling it completely, and the cable holder was fastened. Next, a nut was tightened that drove an insulation-piercing "vampire" tap, at right angles to the restrained cable, such that the inner "fang" made contact with the center conductor of the cable and an outer "fang" made contact with the coaxial shield. A drop cable was then attached to the outside connector of the vampire assembly. In principle, though not always in practice, vampire taps allowed the Ethernet to keep operating while a tap was being attached, because the cable was never cut.

From the tap, for which the more modern term is medium-dependent interface (MDI), a coaxial drop cable ran to another box called a transceiver. The transceiver had two connectors, one for the coaxial drop cable, and the other a 15-pin "D-subminiature" type connected to an attachment unit interface (AUI) cable made up of twisted pairs of copper wire, not coaxial cable.

The D-subminiature connector had two rows of pins arranged in a trapezoidal shell, and a means of fastening it to the transceiver and to the computer's AUI interface. The original specification called for a "slide latch" that required no tools to fasten, but the slide latch was extremely unreliable in practice, and probably received almost as many foul oaths from installation engineers as it received bits from the computer. Although the standard never changed, the usual fasteners were machine screws.

10BASE2

This variant also used coaxial cable, but a thinner and more flexible version that could be run into offices and use a T-connector to interface to a local computer. Typically, the cable appeared on a wall plate with an upstream and a downstream side. When no computer was in the room, continuity was kept with a jumper cable between the two. If a computer was in the room, a T-connector would be inserted between upstream and downstream, and a length of coaxial cable was run to the computer, which had a matching connector.

Separate transceivers were generally not used, unless the computer only had an AUI interface. In those cases, a T-connector was still used, but the "drop cable" from the T-connector went to the transceiver, not the computer.

10BASE-T

10BASE-T was a radical change from the shared coaxial-cable medium, as it assumed point-to-point wiring with much cheaper, more flexible twisted-pair copper wire, which connected each computer either to a multiport repeater that combined the wires into an effectively shared medium, or to a port of a bridge or router. 10BASE-T uses RJ45 snap-in plastic connectors.

Medium access control

Contention for the medium was minimized, and resolved when it occurred, using carrier sense multiple access with collision detection (CSMA/CD) technology. In very simplified terms, the transceiver would not transmit as long as it detected a transmission in progress. If it sensed a clear line, it would transmit, but continued to monitor the line to detect if another device had simultaneously sensed a clear line and started to transmit, causing a collision. CSMA/CD then provided mechanisms for detecting the collision and breaking a tie among the devices waiting to transmit.
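
As a concrete illustration of that loop, here is a minimal Python sketch of the CSMA/CD transmit algorithm with truncated binary exponential backoff. The channel object and its methods (busy, start_transmit, collision_detected, and so on) are hypothetical stand-ins for what the transceiver hardware actually did; the slot time and attempt limits are those of the 10 Mbps standard.

    import random

    SLOT_TIME = 51.2e-6   # 512 bit times at 10 Mbps, in seconds
    MAX_ATTEMPTS = 16     # the standard gives up after 16 attempts
    BACKOFF_LIMIT = 10    # the backoff exponent is capped at 10

    def transmit_with_csma_cd(frame, channel):
        """Sketch of CSMA/CD; 'channel' is a hypothetical interface
        to the shared medium."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            # Carrier sense: defer while a transmission is in progress.
            while channel.busy():
                channel.wait_until_idle()

            channel.start_transmit(frame)
            if not channel.collision_detected():
                return True   # frame sent successfully

            # Collision: jam so every station sees it, then wait a
            # random number of slot times before trying again.
            channel.send_jam_signal()
            k = min(attempt, BACKOFF_LIMIT)
            channel.wait(random.randint(0, 2 ** k - 1) * SLOT_TIME)

        return False          # excessive collisions; the frame is dropped

The randomized wait is what breaks the tie: two colliding stations are unlikely to draw the same number of slots, so one defers to the other on the retry.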

Once bits could be transmitted, the 802.3 frame was sent. It had several fields, all of whose lengths are specified in 8-bit bytes:

Field | Length | Purpose
Preamble | 8 bytes | Physical-layer overhead
Destination address | 6 bytes | Station on the medium to receive the frame
Source address | 6 bytes | Station on the medium that sent the frame
802.1Q tag (only when IEEE 802.1Q is running) | 4 bytes | Tag identifier, plus frame priority and virtual local area network (VLAN) number
Length | 2 bytes | Length of the data field, with a maximum of 1500 bytes
Data | 46 to 1500 bytes | Information payload. If fewer than 46 data bytes were offered, the 802.3 implementation inserted padding so that the frame was always at least 64 bytes long; this eliminated a timing problem in the older DIX Ethernet
Frame check sequence | 4 bytes | Computation over the other frame fields to detect transmission errors
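
To make the layout concrete, the following Python sketch assembles an 802.3 frame from the fields in the table. It is an illustration under stated assumptions: the preamble is omitted because the physical layer generates it, the optional 802.1Q tag is left out, zlib.crc32 stands in for the CRC-32 computation Ethernet uses, and wire-level byte ordering is glossed over.

    import struct
    import zlib

    def build_8023_frame(dst: bytes, src: bytes, data: bytes) -> bytes:
        """Assemble an 802.3 frame (without the preamble, which the
        physical layer prepends). dst and src are 6-byte addresses."""
        assert len(dst) == 6 and len(src) == 6
        assert len(data) <= 1500, "data field is limited to 1500 bytes"

        # The length field carries the true data length, before padding.
        header = dst + src + struct.pack("!H", len(data))

        # Pad the data field to 46 bytes so the whole frame
        # (header + data + FCS) is at least 64 bytes long.
        if len(data) < 46:
            data = data + bytes(46 - len(data))

        # Frame check sequence: CRC-32 over the other frame fields.
        fcs = struct.pack("<I", zlib.crc32(header + data))
        return header + data + fcs

    # A minimum-size frame: 14-byte header + 46-byte data + 4-byte FCS.
    frame = build_8023_frame(b"\x02\x00\x00\x00\x00\x01",
                             b"\x02\x00\x00\x00\x00\x02",
                             b"hello")
    assert len(frame) == 64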

The need for 802.2

Since the field previously used to identify the payload was now used for length, the IEEE 802.2 Logical Link Control protocol was introduced to carry the payload identification instead. An 802.2 header is either 3 or 8 bytes long and occupies the start of the data field, reducing the actual maximum amount of data that can be sent to 1497 or 1492 bytes.
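
Because a length can be at most 1500, while assigned EtherType values start at 1536 (0x0600), a receiver can tell DIX Ethernet and 802.3 frames apart by the value of that one field. A small Python sketch of that dispatch, including peeling the 3-byte form of the 802.2 header, follows; the offsets assume no 802.1Q tag is present.

    def classify_frame(frame: bytes):
        """Distinguish DIX Ethernet from IEEE 802.3 by the 2-byte
        field that follows the two 6-byte addresses."""
        type_or_length = int.from_bytes(frame[12:14], "big")

        if type_or_length >= 0x0600:       # 1536 and up: an EtherType
            return ("DIX Ethernet", type_or_length, frame[14:])

        # 802.3: the field is a length, and the data field begins with
        # an 802.2 header (DSAP, SSAP, control in the 3-byte form).
        data = frame[14:14 + type_or_length]
        dsap, ssap, control = data[0], data[1], data[2]
        return ("802.3 with 802.2", (dsap, ssap, control), data[3:])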

802.3 extensions

list to be provided