2

I've heard the terms core and thread used synonymously a lot of times. In Minecraft, if you press F3 to show coordinates, it tells you how many threads you have in your CPU but uses the term "core" for them. I know some things about threads and cores, but to me they seem almost indistinguishable. Both can run programs and have memory, but there are typically more threads, and threads can also appear in GPUs. Is there any difference in architecture between cores and threads?

tobalt
Trevor Mershon

5 Answers

2

Please read this Wikipedia page.

A CPU core is a unit that can perform instructions.

Most CPU cores can only process one instruction at a time.

Some CPU cores can process more (usually two) instructions at the same time. Intel calls this hyperthreading. To the operating system (Windows, Linux, etc.) and therefore to the application software (for example, Minecraft), such a CPU looks like it has two (logical) cores, since it can process two instructions at the same time. At least, that's what it looks like to the software (operating system and application).

A thread is a "string of instructions" that are processed on one logical CPU core.

So a single-core but hyperthreading CPU can process two instructions at the same time, and can therefore handle two threads at the same time as well.

To complicate things, the software switches (in time) between many threads, so usually there are far more threads active than there are logical CPU cores present. But that's OK: if everything is processed quickly enough, you won't notice this as a user, because most threads (tasks) can wait a little for their turn to use the CPU. Only when a quick response is needed might a thread be assigned a higher priority so that it gets preference over other threads.
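As a rough illustration of that ratio (a minimal Python sketch, not tied to any particular application), a program can start far more software threads than the operating system reports logical cores, and the scheduler simply time-slices them:

```python
import os
import threading
import time

def worker(name):
    # Stand-in for a task that mostly waits for its turn to use the CPU.
    time.sleep(0.1)
    print(f"{name} done")

logical_cpus = os.cpu_count() or 1            # logical cores the OS reports
threads = [threading.Thread(target=worker, args=(f"task-{i}",))
           for i in range(4 * logical_cpus)]  # deliberately oversubscribe

for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(threads)} software threads ran on {logical_cpus} logical cores")
```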

Bimpelrekkie
    Processing two instructions at the same time means you are superscalar, not that you have SMT (hyperthreading). SMT/hyperthreading means that you process two (or more) threads at the same time (but not necessarily instructions). There are processors out there with SMT that cannot issue from more than one thread at the same time (e.g. Hexagon). – user1850479 Mar 24 '20 at 00:26
1

Hyperthreading or not, an extra core always means another thread is available with which to do work, but an extra thread doesn't necessarily mean an extra core is present.

Think of people as cores and the number of hands they have as potential threads. One-handed people are single-threading cores, and two-handed people are multi-threading cores.

In that case, if you add an extra person, you are unequivocally adding the capability to do at least one extra task simultaneously.

If the person is one-handed, they can only handle one task at a time. If the person is two-handed, they might be able to handle two tasks at a time, depending on what the particular tasks are; it's not a given that they will be able to handle any pair of tasks simultaneously. A single two-handed person is not as performant as two one-handed people, but sometimes it is cheaper to squeeze more utility out of the one person than to add another person.
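To count the "people" and the "hands" on a real machine, here is a hedged sketch (Linux only, assuming the usual sysfs layout): it compares the number of logical CPUs the OS reports with the number of distinct physical cores behind them.

```python
import glob
import os

logical = os.cpu_count()  # "hands": logical CPUs / hardware threads

# "people": distinct physical cores, identified by (package, core_id) pairs
cores = set()
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/core_id"):
    pkg_path = path.replace("core_id", "physical_package_id")
    with open(path) as core_f, open(pkg_path) as pkg_f:
        cores.add((pkg_f.read().strip(), core_f.read().strip()))

print("logical CPUs (threads):", logical)
print("physical cores:        ", len(cores) if cores else "unknown (no sysfs?)")
```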

DKNguyen
    If people are cores, then hands are arithmetic units, and hardware threads are multiple tables in front of the person. If one table (task) is blocked because it's waiting for delivery of data from across the room, the person can grab something from the other table and do useful work while waiting for inputs for the first task... without having to do a context switch of picking up everything on a table, placing it into a crate, unloading another crate onto the table. The second table is just there within arm's reach ready to work on. – Ben Voigt Jan 26 '23 at 23:19
1

A core is a physical processor. Multi-threading is the capability to run multiple threads on a single core, so those threads have to share the resources available in that core. If one thread occupies all of the core's resources, another thread cannot run on the same core.
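A small sketch of that sharing (again Linux-specific, assuming sysfs is present): each logical CPU lists which siblings sit on the same physical core, and those siblings are the threads competing for that core's resources.

```python
import glob

# For every logical CPU, print which logical CPUs share its physical core.
paths = sorted(glob.glob(
    "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"))
for path in paths:
    cpu = path.split("/")[5]  # e.g. "cpu0"
    with open(path) as f:
        siblings = f.read().strip()
    print(f"{cpu}: shares its core with logical CPUs {siblings}")
```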

Rusk Box
0

This answer focuses on CPUs; I do not know enough about GPUs to speak about how GPU vendors use terminology.


Normally the difference is that each "core" has a dedicated set of execution units, while the hardware threads within a "core" share execution units.

Having multiple threads per core potentially makes it easier to keep the execution units busy: when one thread is waiting on (for example) a memory fetch, the other can still be performing useful processing.

But there are edge cases. For example, AMD's Bulldozer had "modules", each of which contained two "cores". The "cores" within a module had dedicated resources for integer processing but shared resources for floating-point processing. There were many arguments about whether what AMD called cores really deserved to be called cores, or whether each module should be considered a core, with the units within a module being called threads.


"In Minecraft, if you press F3 to show coordinates, it tells you how many threads you have in your CPU but uses the term "core" for them."

Systems with multiple CPU chips have been around a lot longer than systems with multiple cores on the same chip, or CPU cores that support multiple threads of execution.

When hyperthreading and multicore came along, they re-used the existing support for multiprocessor systems. The result is that when you look at APIs and data structures used by operating systems and programming environments, they usually use the term CPU for what CPU marketers and PC builders would call a "thread".

It doesn't help that the term "thread" also has a meaning in software. One can talk about "hardware threads" and "software threads" but that very quickly gets tedious.
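As a quick sketch of that distinction (plain Python, nothing platform-specific): the hardware threads are what the operating system exposes as "CPUs", while the software threads are whatever one particular process happens to be running at the moment.

```python
import os
import threading

print("hardware threads (what the OS calls CPUs):", os.cpu_count())
print("software threads in this one process:     ", threading.active_count())
```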

That said, when I look at pictures of the Minecraft debug screen, I don't see what you observe. I see a number at the start followed by an "x", but it doesn't explain what that number means. I also sometimes see cores mentioned in the description of the processor type, but in that case they seem to be in line with the CPU vendor's marketing.

Peter Green
-1

CPU threads don't have all the functional units that make up a whole CPU (or CPU core). Typically they don't have their own L1 cache but share it with the other threads on the same core. They may also lack their own ALU and instead share it with the other threads on the same core.

If you have a multicore CPU, it's still useful to give each core two threads, so that, for example, one copy operation and one ALU operation can execute at the same time on each core. More than two threads per core aren't very useful.

CPU threads are most useful if the software runs multiple threads of related functions and their data in parallel, so all required data can be accessed through the L1 cache. They are less useful when there are context switches.
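To experiment with this, a process can be pinned to specific logical CPUs; whether the chosen pair are siblings on one core or sit on two different cores is what decides whether they share an L1 cache and ALUs. A hedged sketch (Linux only; os.sched_setaffinity is not available on every platform):

```python
import os

if hasattr(os, "sched_setaffinity"):
    allowed = sorted(os.sched_getaffinity(0))   # logical CPUs we may run on
    print("allowed logical CPUs:", allowed)
    if len(allowed) >= 2:
        # Pin this process to two logical CPUs; check the sysfs topology
        # to know whether these two are SMT siblings or separate cores.
        os.sched_setaffinity(0, {allowed[0], allowed[1]})
        print("now pinned to:", sorted(os.sched_getaffinity(0)))
else:
    print("CPU affinity control is not exposed on this platform")
```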

Janka
  • Superscalar processors, for example modern x86, will already execute out-of-order to make the most of available execution units. Modern compilers can optimise for this condition. Without hyper-threading (or the equivalent from another manufacturer) you only have one pipeline leading into the execution units, so I’m not sure why you’d change whatever defaults your OS scheduler uses. – David Mar 23 '20 at 23:30
  • Hyper-Threading is from the 1990s, when such reordering compilers were exotic, at least on the x86 platform. It also works for binary applications. It's a patch fixing the broken PC software model of that time. – Janka Mar 24 '20 at 04:10
  • @Janka: automatic reordering was available in hardware on x86 (Intel Pentium Pro, AMD K5, 1996) considerably before hyper-threading (Pentium 4, 2002) and compilers were reordering instructions (aka "instruction scheduling") before either. In fact, a large number of optimizations (e.g. common subexpression elimination) aren't even possible without reordering. – Ben Voigt Jan 26 '23 at 23:16