Microchip technology, from creation to shortage in the supply chain

1961: Costly progress

As the microchip became more widely available, the United States Air Force began using it to build missiles and NASA adopted it for the Apollo program. At this point, a single microchip cost $31.

1965: Moore’s Law

Intel co-founder Gordon E. Moore observed that the number of transistors on a chip doubles roughly every two years, while the cost of computing is halved. This observation, which later became known as Moore’s Law, suggested that computers would become cheaper even as their capabilities increased.
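To make the doubling claim concrete, here is a minimal sketch of Moore’s Law as a simple exponential projection. The 1965 starting count of 64 transistors and the strict two-year doubling period are illustrative assumptions, not figures from the article.

```python
# Illustrative projection of Moore's Law: transistor count doubles every two years.
# The base year and base count are assumed values for demonstration only.

def projected_transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Project transistor count assuming a strict doubling every `doubling_years` years."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1965, 1971, 1985, 2005):
    print(year, round(projected_transistors(year)))
```

Run as written, the loop shows how a modest starting count grows into the millions within a few decades under the doubling assumption.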

1971: Cost reduction through mass production in the supply chain

Six years later, Moore’s Law had held true. With investment from the US government, mass production reduced the cost of a chip to US$1.25.

“It was the government that created the high demand that facilitated the mass production of the chip,” explained Fred Kaplan, author of 1959: The Year Everything Changed.

1986: Cost control with the Semiconductor Agreement

However, Moore had not accounted for how competing international interests and trade disputes would affect microchip manufacturing. The 1986 Semiconductor Agreement between the United States and Japan fixed manufacturing prices so that competition in the supply chain would not spiral out of control.

1998: The first microchip is implanted in a human being

The first microchip-to-human experiment took place at the end of the 20th century. Professor Kevin Warwick, director of cybernetics at the University of Reading, became the first person in history to have a microchip implanted in his body.

After a week, the microchip was removed. Warwick said that while the chip was in place, smart-card-activated doors opened for him and lights flashed on around him.
