Why do we need so many transistors?


34

Transistors serve multiple purposes in an electrical circuit: as switches, to amplify electronic signals, to let you control current, and so on.

However, I recently read about Moore's law, among other random articles on the internet, and learned that modern electronic devices have an enormous number of transistors packed into them, with the transistor count in modern electronics in the range of millions, if not billions.

But why exactly would anyone need that many transistors anyway? If transistors work as switches and so on, why do we need such an absurd quantity of them in our modern electronic devices? Can't we make things more efficient, so that we use far fewer transistors than we currently do?


7
I'd suggest finding out what your chip is made of. Adders, multipliers, multiplexers, memory, more memory... And think about how many of those things need to be in there...
Dzarda

1
Also, the ongoing use of transistors as replacements for most mechanical devices has shaped modern consumer electronics more than anything else. Imagine your phone clattering every time it switches the backlight on or off (while also being the size and weight of a car).
Mark

7
You ask why we can't "make things more efficient" so as to use fewer transistors; you're assuming that we try to minimize the transistor count. What if power efficiency is improved by adding more transistors for control? Or, more to the point, the time efficiency of whatever computation is being done? "Efficiency" is not one single thing.
OJFord

2
It's not that we need that many transistors to build a CPU; rather, since we can make all those transistors, we might as well use them in ways that make the CPU faster.
user253751

Answers:


46

Transistors are switches, yes, but switches are for more than just turning lights on and off.

Switches are grouped into logic gates. Logic gates are grouped into logic blocks. Logic blocks are grouped into logic functions. Logic functions are grouped into chips.

For example, a TTL NAND gate typically uses 2 transistors (NAND gates are considered one of the fundamental building blocks of logic, along with NOR):

[schematic: TTL NAND gate - created using CircuitLab]

As technology moved from TTL to CMOS (which is now the de facto standard), there was essentially an immediate doubling of transistors. For example, the NAND gate went from 2 transistors to 4:

[schematic: CMOS NAND gate - created using CircuitLab]

A latch (such as an SR latch) can be made from 2 CMOS NAND gates, hence 8 transistors. A 32-bit register can then be built from 32 flip-flops, hence 64 NAND gates, or 256 transistors. An ALU may have several registers, plus many other gates, so the transistor count grows quickly.
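To make the arithmetic concrete, here is a minimal Python sketch that simply tallies transistors the way this answer does (4 per CMOS NAND, 2 NANDs per latch, one latch per register bit). As the comments below point out, a real flip-flop needs more gates than this, so treat the numbers as illustrative lower bounds:

    TRANSISTORS_PER_CMOS_NAND = 4  # per the CMOS schematic above

    def sr_latch():
        # an SR latch made of 2 cross-coupled NAND gates
        return 2 * TRANSISTORS_PER_CMOS_NAND

    def register(bits=32):
        # one latch per bit, as in the example above
        return bits * sr_latch()

    print(sr_latch())    # 8
    print(register(32))  # 256
    print(register(64))  # 512 - doubling the width doubles the count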

The more complex the functions a chip performs, the more gates are needed, and therefore the more transistors.

Your average CPU these days is considerably more complex than a Z80 chip from 30 years ago. Not only does it use registers 8 times as wide, but the actual operations it performs (complex 3D transformations, vector processing, etc.) are all far more complex than anything the older chips could do. A single instruction in a modern CPU may take many seconds (or even minutes) of computation on an old 8-bit machine, and all of that is achieved, in the end, with more transistors.


NAND = 4 not 2 transistors, and FFs are more than just 2 NORs
placeholder

2
Oh my! You really need to rethink this. Show even ONE design with millions of transistors made in bipolar!! ALL of those designs are CMOS.
placeholder

2
Valid point. Added a second schematic to highlight the difference, and the resulting doubling of transistors that comes with it.
Majenko

3
Weak vs. strong pull-up is a completely different issue from TTL vs. CMOS. BJTs come in PNP, after all. CMOS does not entail a "doubling of transistors". Large-scale integration happened because transistors are much smaller than pull-up resistors in any ASIC process.
Ben Voigt

1
That's not a TTL NAND gate. It's an RTL logic gate.
fuzzyhair2

16

I checked a local supplier of various semiconductor devices, and the largest SRAM chip they had was 32 Mbit. That's 32 million individual locations where a 1 or a 0 can be stored. Given that "at least" 1 transistor is needed to store 1 bit of information, that's 32 million transistors at the absolute minimum.

What does 32 Mbit get you? That's 4 Mbytes, or roughly the size of a low-quality 4-minute MP3 music file.


EDIT - an SRAM memory cell, according to my googling, looks like this:

[image: 6-transistor SRAM cell schematic]

So that's 6 transistors per bit, and more like 192 million transistors on that chip I mentioned.
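The numbers check out with a quick calculation (Python as a calculator, taking "mega" as 10^6 the way this answer does):

    bits = 32_000_000      # a 32 Mbit SRAM chip, with mega = 10**6
    print(bits / 8 / 1e6)  # 4.0 -> 4 Mbytes, about one low-quality MP3
    print(bits * 6)        # 192000000 -> the 192 million transistors at 6T/bit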


... and now imagine 8 GB of memory, holding 68719476736 bits of information
Kamil

1
... except they don't use transistors in DRAM.
Majenko

1
@Majenko: at least not as many as other technologies use. 1 transistor + 1 capacitor (microscopic, of course) per bit - if I remember correctly.
Rev.1.0

28
Each bit of SRAM takes at least 4 and often 6 transistors, so 128 million transistors or more. DRAM doesn't use transistors for storage, but each bit (stored on a capacitor) has its own transistor switch to charge the cap.
Brian Drummond

6
Now imagine the transistors in a 1 TB SSD (granted, 3 bits/cell, and it's spread over more than one chip): that's still 2.7 trillion transistors just for the storage, not counting addressing, control, and allowance for bad bits and wear.
Spehro Pefhany

7

I think the OP may be confused by electronic devices having so many transistors. Moore's Law is primarily of concern for computers (CPUs, SRAM/DRAM/related storage, GPUs, FPGAs, etc.). Something like a transistor radio might be (mostly) on a single chip, but can't make use of all that many transistors. Computing devices, on the other hand, have an insatiable appetite for transistors for additional functions and wider data widths.


3
Radios these days are computing devices, or at the very least contain them. Digital synthesis of FM frequencies, DSP signal processing of the audio (a biggie), digital supervisory control of station switching and so on. For example, the TAS3208 ti.com/lit/ds/symlink/tas3208.pdf
Spehro Pefhany

1
You're still not going to see tens or hundreds of millions, much less billions, of transistors used for a radio. Sure, they're becoming small special-purpose computers with all that digital functionality, but nothing on the scale of a multicore 64-bit CPU.
Phil Perry

@PhilPerry surely a digital radio has something like an ARM in it? Not billions of transistors, but well into the tens of millions.

Well, if you've crossed "the line" from analog radio to a computer that (among other things) receives radio signals, you'll use lots of transistors. My point still stands that the OP's question about electronic devices sounds like confusion between classic analog radios, etc. and computing devices. Yes, they perform in very different manners even if they're both black boxes pulling music out of the air.
Phil Perry

4

As previously stated, SRAM requires 6 transistors per bit. As we enlarge our caches (for efficiency purposes), we require more and more transistors. Looking at a processor wafer, you may see that the cache is bigger than a single core of the processor, and if you look closer at the cores, you will see well-organized parts within them that are also cache (probably the L1 data and instruction caches). With 6 MB of cache, you need 300 million transistors (plus the addressing logic).

But, also as previously stated, caches are not the only reason the transistor count increases. On a modern Core i7, more than 7 instructions are executed per clock period per core (using the well-known Dhrystone benchmark). This means one thing: state-of-the-art processors do a lot of parallel computing. Doing more operations at the same time requires having more units to do them, and much cleverer logic to schedule them. Cleverer logic requires much more complex logical equations, and so many more transistors to implement them.
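For a sense of scale, here is the cache arithmetic as a quick sketch (Python as a calculator, taking 6 MB as 6 * 2^20 bytes and the 6T/bit figure from above):

    cache_bits = 6 * 2**20 * 8  # a 6 MB cache, in bits
    print(cache_bits * 6)       # 301989888 -> ~300 million transistors at 6T/bit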


SRAM hasn't required 6 transistors per bit in quite a few years. In fact, 6T SRAM is pretty wasteful when you can use 1T, 2T, or 4T SRAMs as essentially drop-in replacements.
cb88

2

Stepping away from the details a bit:

Computers are complex digital switching devices. They have layer upon layer upon layer of complexity. The simplest level is logic gates like NAND gates, as discussed. Then you get to adders, shift registers, latches, etc. Then you add clocked logic, instruction decoding, caches, arithmetic units, address decoding; it goes on and on and on. (Not to mention memory, which requires several transistors per bit of data stored.)

Every one of those levels is using lots of parts from the previous level of complexity, all of which are based on lots and lots of the basic logic gates.

Then you add concurrency. In order to get faster and faster performance, modern computers are designed to do lots of things at the same time. Within a single core, the address decoder, arithmetic unit, vector processor, cache manager, and various other subsystems all run at the same time, all with their own control systems and timing systems.

Modern computers also have larger and larger numbers of separate cores (multiple CPUs on a chip.)

Every time you go up a layer of abstraction, you have many orders of magnitude more complexity. Even the lowest level of complexity has thousands of transistors. Go up to high level subsystems like a CPU and you are talking at least millions of transistors.

Then there are GPUs (Graphics Processing Units). A GPU might have a THOUSAND separate floating-point processors that are optimized to do vector mathematics, and each sub-processor will have several million transistors in it.


1

Without attempting to discuss how many transistors are needed for specific items, CPUs use more transistors for increased capabilities, including:

  • More complex instruction sets
  • More on-chip cache so that fewer fetches from RAM are required
  • More registers
  • More processor cores

1

Aside from increasing the raw storage capacities of RAM, cache, and registers, as well as adding more computing cores and wider bus widths (32 vs. 64 bit, etc.), it is because the CPU is increasingly complicated.

CPUs are computing units made up of other computing units. A CPU instruction goes through several stages. In the old days, there was one stage, and the clock period had to be as long as the worst-case time for all the logic gates (made from transistors) to settle. Then we invented pipelining, where the CPU is broken up into stages: instruction fetch, decode, process, and write result. That simple 4-stage CPU can then run at a clock speed 4x the original. Each stage is separate from the other stages. This means not only can your clock speed increase to 4x, but you can now have 4 instructions layered (or "pipelined") in the CPU, resulting in 4x the performance. However, "hazards" are now created, because an incoming instruction may depend on the previous instruction's result, and since it's pipelined, that result isn't ready yet when the dependent instruction enters the process stage just as the previous one exits it. Therefore, you need to add circuitry to forward the result to the instruction entering the process stage. The alternative is to stall the pipeline, which decreases performance.
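To visualize the overlap, here is a toy Python sketch of the 4-stage pipeline just described. It only tracks which instruction occupies which stage each cycle; the instruction mnemonics are made up, and hazards and forwarding are deliberately left out of the model itself:

    STAGES = ["fetch", "decode", "process", "write"]

    def run_pipeline(instructions):
        # instruction i occupies stage s during cycle i + s
        for cycle in range(len(instructions) + len(STAGES) - 1):
            slots = []
            for s, stage in enumerate(STAGES):
                i = cycle - s
                if 0 <= i < len(instructions):
                    slots.append(f"{stage}: {instructions[i]}")
            print(f"cycle {cycle}: " + " | ".join(slots))

    run_pipeline(["ADD r1,r2", "SUB r3,r1", "MUL r4,r3", "AND r5,r4"])
    # At cycle 3, SUB enters "process" needing r1 while ADD is only now in
    # "write" -- the hazard described above; forwarding circuitry (more
    # transistors) hands the result between stages instead of stalling.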

Each pipeline stage, and particularly the process part, can be sub-divided into more and more steps. As a result, you end up creating a vast amount of circuitry to handle all the inter-dependencies (hazards) in the pipeline.

Other circuits can be enhanced as well. A trivial digital adder called a "ripple carry" adder is the easiest and smallest, but also the slowest. The fastest adder is a "carry look-ahead" adder, and it takes a tremendous amount of circuitry. In my computer engineering course, I ran out of memory in my simulator of a 32-bit carry look-ahead adder, so I cut it in half: two 16-bit CLA adders in a ripple-carry configuration. (Adding and subtracting are easy for computers, multiplying is harder, and division is very hard.)
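As an illustration of the trade-off, here is a sketch of the ripple-carry scheme in Python: each bit's full adder must wait for the carry from the bit below, so the delay grows linearly with the word width, while a carry look-ahead adder spends extra gates to compute the carries in parallel:

    def full_adder(a, b, carry_in):
        # one-bit full adder: two XORs, two ANDs, one OR in gate terms
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_carry_add(x, y, width=32):
        result, carry = 0, 0
        for i in range(width):  # the carry ripples through one bit at a time
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result, carry    # carry is the final carry-out (overflow)

    print(ripple_carry_add(1234, 4321))     # (5555, 0)
    print(ripple_carry_add(0xFFFFFFFF, 1))  # (0, 1) - wraps, carry out set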

A side effect of all this is that as we shrink the size of transistors and subdivide the stages, the clock frequencies can increase. This allows the processor to do more work, so it runs hotter. Also, as frequencies increase, propagation delays become more apparent (the time it takes for a pipeline stage to complete and for the signal to be available at the other side). Due to impedance, the effective speed of propagation is about 1 ft per nanosecond (one clock period at 1 GHz). As your clock speed increases, chip layout becomes increasingly important, as a 4 GHz chip has a maximum size of 3 inches. So now you must start including additional buses and circuits to manage all the data moving around the chip.
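The geometry argument in numbers (a quick check; Python as a calculator):

    # distance a signal covers in one clock period, at ~1 ft/ns = 12 in/ns
    for ghz in (1, 2, 4):
        print(ghz, "GHz ->", 12 / ghz, "inches per clock period")
    # 4 GHz -> 3.0 inches: the size limit mentioned above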

We also add instructions to chips all the time: SIMD (single instruction, multiple data), power saving, and so on. They all require circuitry.

Finally, we add more features to chips. In the old days, your CPU and your ALU (Arithmetic Logic Unit) were separate. We combined them. Then the FPU (Floating Point Unit) was separate; that got combined too. Nowadays, we add USB 3.0, video acceleration, MPEG decoding, etc. We move more and more computation from software into hardware.


1

Majenko has a great answer on how the transistors are used. So let me instead come at it from a different angle and deal with efficiency.

Is it efficient to use as few transistors as you can when designing something?

This basically boils down to what efficiency you're talking about. Perhaps you're a member of a religion that maintains it is necessary to use as few transistors as possible - in that case, the answer is pretty much given. Or perhaps you're a company building a product. Suddenly, a simple question about efficiency becomes a very complicated question about the cost - benefit ratio.

And here comes the kicker - transistors in integrated circuits are extremely cheap, and they're getting ever cheaper with time (SSDs are a great example of how the cost of transistors was pushed down). Labor, on the other hand, is extremely expensive.

In the times when ICs were just getting started, there was a certain push to keep the number of components required as low as possible. This was simply because they had a significant impact on the cost of the final product (in fact, they were often most of the cost of the product), and when you're building a finished, "boxed" product, the labor cost is spread out over all the pieces you make. The early IC-based computers (think video arcades) were driven to as low a per-piece cost as possible. However, the fixed costs (as opposed to per-piece costs) are strongly impacted by the amount you are able to sell. If you were only going to sell a couple, it probably wasn't worth spending too much time on lowering the per-piece costs. If you were trying to build a whole huge market, on the other hand, driving the per-piece costs as low as possible had a pay-off.

Note an important part - it only makes sense to invest a lot of time in improving the "efficiency" when you're designing something for mass-production. This is basically what "industry" is - with artisans, skilled labor costs are often the main cost of the finished product, in a factory, more of the costs comes from materials and (relatively) unskilled labor.

Let's fast-forward to the PC revolution. When IBM-style PCs came around, they were very stupid. Extremely stupid. They were general-purpose computers. For pretty much any task, you could design a device that could do it better, faster, and cheaper. In other words, in the simplistic efficiency view, they were highly inefficient. Calculators were much cheaper, fit in your pocket, and ran for a long time off a battery. Video game consoles had special hardware that made them very good at running games. The problem was, they couldn't do anything else. The PC could do everything - it had a much worse price/output ratio, but you weren't railroaded into a calculator or a 2D sprite game console. Why did Wolfenstein and Doom (and on Apple computers, Marathon) appear on general-purpose computers and not on game consoles? Because the consoles were very good at doing 2D sprite-based games (imagine the typical JRPG, or games like Contra), but when you wanted to stray from what the efficient hardware did well, you found out there wasn't enough processing power to do anything else!

So, the apparently less efficient approach gives you some very interesting options:

  • It gives you more freedom. Contrast old 2D consoles with old IBM PCs, and old 3D graphics accelerators to modern GPUs, which are slowly becoming pretty much general purpose computers on their own.
  • It enables mass-production efficiency increases even though the end product (software) is "artisan" in some ways. So companies like Intel can drive the cost of a unit of work down much more efficiently than all the individual developers all over the world.
  • It gives more space for more abstractions in development, thus allowing better reuse of ready solutions, which in turn allows lower development and testing costs for better output. This is basically the reason why every schoolboy can write a full-fledged GUI-based application with database access and internet connectivity and all the other stuff that would be extremely hard to develop if you always had to start from scratch.
  • In PCs, this used to mean that your applications basically got faster over time without your input. The free-lunch time is mostly over now, since it's getting harder and harder to improve the raw speed of computers, but it shaped most of the PC's lifetime.

All this comes at a "waste" of transistors, but it's not real waste, because the real total costs are lower than they would be if you pushed for the simple "as few transistors as possible".


1

Another side of the "so many transistors" story is that these transistors are not individually designed-in by a human. A modern CPU core has on the order of 0.1 billion transistors, and no human designs every one of those transistors directly. It wouldn't be possible. A 75-year lifetime is only 2.3 billion seconds.
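That figure checks out with quick arithmetic (Python as a calculator):

    print(75 * 365.25 * 24 * 3600)  # ~2.37e9 -> about 2.3 billion seconds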

So, to make such huge designs feasible, the humans are involved in defining the functionality of the device at a much higher level of abstraction than individual transistors. The transformation to the individual transistors is known as circuit synthesis, and is done by very expensive, proprietary tools that collectively cost on the order of a billion dollars to develop over the years, aggregating among the major CPU makers and foundries.

The circuit synthesis tools don't generate designs with the least number of transistors possible. This is done for a multitude of reasons.

First, let's cover the most basic case: any complex circuit can be simulated by a much simpler, perhaps serial, CPU with sufficient memory. You can certainly simulate an i7 chip, with perfect accuracy, if only you hook up enough serial RAM to an Arduino. Such a solution would have far fewer transistors than the real CPU and would run abysmally slowly, with an effective clock rate of 1 kHz or less. We clearly don't intend the transistor-count reduction to go that far.

So we must limit ourselves to a certain class of design-to-transistors transformations: those that maintain the parallel capacity built into the original design.

Even then, the optimization for minimal number of transistors will likely produce designs that are not manufacturable using any existing semiconductor process. Why? Because chips that you can actually make are 2D structures, and require some circuit redundancy simply so that you can interconnect those transistors without requiring a kilogram of metal to do so. The fan-in and fan-out of the transistors, and resulting gates, does matter.

Finally, the tools aren't theoretically perfect: it'd usually require way too much CPU time and memory to generate solutions that are globally minimal in terms of transistor numbers, given a constraint of a manufacturable chip.


0

I think what the OP needs to know is that a "simple switch" often needs several transistors. Why? Well, for many reasons. Sometimes extra transistors are needed so that power usage is low in either the "on" or the "off" state. Sometimes transistors are needed to deal with uncertainties in voltage inputs or component specifications. A lot of reasons. But I appreciate the point. Look at the circuit diagram of an op-amp and you'll see a few dozen transistors! They wouldn't be there if they didn't serve some purpose in the circuit.


0

Basically, all the computer understands is 0s and 1s, which are decided by these switches. Yes, transistors do more than act as switches. But if a switch decides whether an output is a 0 or a 1 (call that a single-bit operation), then the more bits you have, the more transistors you need. So it's no wonder we have to embed millions of transistors into a single microprocessor. :)


0

In the era of technology, we need smart devices (small, fast, and efficient). These devices are made up of integrated circuits (ICs), which contain a number of transistors. We need more and more transistors to make ICs smarter and faster, because every circuit in an IC - adders, subtractors, multipliers, dividers, logic gates, registers, multiplexers, flip-flops, counters, shifters, memories, microprocessors, and so on - implements some logic in the device, and all of these are made from transistors (MOSFETs). With the help of transistors, we can implement any logic. So we need more and more transistors...


Licensed under cc by-sa 3.0 with attribution required.