Without any further coordination, even a single writer plus a single reader can produce a classic race condition. Several factors are involved.
If only one memory location is involved (a byte, or an aligned word), it is possible for two threads, one writer and one reader, accessing the same location to communicate effectively. (Alignment usually matters in the context of the processor's memory model, because unaligned data acts like two or more independent memory locations.)
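As a minimal sketch of that single-location case, assuming a Java-style memory model (where `volatile` supplies the visibility guarantee that raw aligned loads and stores give at the hardware level):

```java
// One writer, one reader, one memory location.
// volatile guarantees the reader eventually sees the writer's store;
// with a plain boolean field the reader could spin forever.
public class OneLocationSignal {
    private volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        OneLocationSignal s = new OneLocationSignal();

        Thread reader = new Thread(() -> {
            while (!s.done) {
                // busy-wait until the writer's store becomes visible
            }
            System.out.println("reader saw the write");
        });

        reader.start();
        s.done = true;   // the writer's single store
        reader.join();
    }
}
```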
However, staying within these limits permits only a very narrow channel of interaction between two threads.
Involve more than one memory location, or more than one writer, and explicit synchronization is almost certainly required, as the sketch below illustrates.
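For instance, two writers incrementing a plain shared counter can lose updates, because `count++` is really a separate read, add, and write that can interleave (a hypothetical demonstration):

```java
// Two writers, one plain int: a classic lost-update race.
// count++ compiles to read/add/write, so increments can interleave
// and overwrite each other; the final total is usually < 2_000_000.
public class LostUpdates {
    static int count = 0;   // plain field: no atomicity, no visibility guarantee

    public static void main(String[] args) throws InterruptedException {
        Runnable writer = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                count++;    // racy read-modify-write
            }
        };
        Thread a = new Thread(writer);
        Thread b = new Thread(writer);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("expected 2000000, got " + count);
    }
}
```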
There are various processor instructions that facilitate synchronization.
One set works like an atomic read-modify-write and allows multiple writers to, among other things, increment a counter without losing any counts. These are sometimes implemented as compare-and-swap instructions; there are a number of variations, including the paired instructions load-linked and store-conditional.
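In Java these instructions surface through the `java.util.concurrent.atomic` classes; `AtomicInteger.getAndIncrement()` is typically compiled down to a hardware CAS or LL/SC loop. A sketch of the same two-writer workload, fixed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Same two-writer workload as above, fixed with an atomic RMW.
// getAndIncrement() retries a hardware compare-and-swap (or LL/SC)
// until the increment lands, so no count is ever lost.
public class AtomicCounts {
    static final AtomicInteger count = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable writer = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                count.getAndIncrement();   // atomic read-modify-write
            }
        };
        Thread a = new Thread(writer);
        Thread b = new Thread(writer);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("always 2000000: " + count.get());
    }
}
```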
There are also memory barrier (fence) instructions that constrain when a processor's reads and writes become visible to other processors, often described informally as flushing the individual processor caches out to common main memory.
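Java 9 exposes raw fences through `VarHandle`. Below is a hedged sketch of the usual release/acquire publication pattern; real code would more likely use `volatile` or a lock rather than bare fences on plain fields:

```java
import java.lang.invoke.VarHandle;

// Release/acquire fences ordering a two-location handoff.
// Without the fences, the writer's two stores (data, then ready)
// could become visible to the reader in the opposite order.
public class FencedHandoff {
    static int data = 0;            // payload
    static boolean ready = false;   // publication flag

    static void writer() {
        data = 42;
        VarHandle.releaseFence();     // data must be visible before ready
        ready = true;
    }

    static boolean reader() {
        if (ready) {
            VarHandle.acquireFence(); // loads below are ordered after seeing ready
            return data == 42;        // holds once ready has been observed
        }
        return false;                 // not published yet
    }
}
```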
These primitives can be used to build larger constructs such as locks; most operating systems provide richer thread synchronization facilities (mutexes, semaphores, condition variables) that are ultimately built on these hardware primitives.
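To make that concrete, here is a hypothetical toy spinlock built directly on a single compare-and-swap; production locks instead park waiting threads via the OS scheduler rather than burning CPU:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A toy test-and-set spinlock built on one CAS primitive.
// lock() spins until it atomically flips held from false to true;
// unlock() releases with a plain volatile store.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we're spinning
        }
    }

    public void unlock() {
        held.set(false);           // releases the lock
    }
}
```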
Programming languages and operating systems expose these hardware primitives through locks, synchronized methods and blocks, and volatile variables.
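In Java, for example, a synchronized block lets one writer update several memory locations that readers then observe as a consistent unit (a sketch):

```java
// synchronized guards a multi-location invariant (x == y).
// Both fields change together under the lock, so a reader
// holding the same lock can never observe a half-done update.
public class Pair {
    private int x, y;

    public synchronized void set(int v) {
        x = v;              // two locations updated...
        y = v;              // ...atomically with respect to readers
    }

    public synchronized boolean consistent() {
        return x == y;      // always true under the same lock
    }
}
```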
Transactional memory is another very interesting approach, with some emerging hardware support, but it is still quite new.