Classically a compiler has three parts: lexical analysis, parsing, and code generation. Lexical analysis breaks the text of the program into tokens: language keywords, names, and values. Parsing figures out how the tokens produced by the lexical analysis combine into syntactically correct statements of the language. Code generation takes the data structures produced by the parser and translates them into machine code or some other representation. Nowadays lexical analysis and parsing may be combined into a single step.
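To make the division of labor concrete, here is a minimal sketch of all three phases for a toy language of integer addition and multiplication. The token names, the tree shape, and the stack-machine instructions it emits are illustrative choices, not any particular compiler's design:

    import re

    # 1. Lexical analysis: break the program text into tokens.
    TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

    def lex(source):
        tokens = []
        for number, other in TOKEN_RE.findall(source):
            if number:
                tokens.append(("NUM", int(number)))
            elif other in "+*()":
                tokens.append((other, other))
            else:
                raise SyntaxError(f"unexpected character {other!r}")
        return tokens

    # 2. Parsing: combine the tokens into a syntax tree, honoring
    # the usual precedence of * over +.
    def parse(tokens):
        pos = 0
        def peek():
            return tokens[pos][0] if pos < len(tokens) else None
        def expr():                      # expr := term ('+' term)*
            nonlocal pos
            node = term()
            while peek() == "+":
                pos += 1
                node = ("add", node, term())
            return node
        def term():                      # term := atom ('*' atom)*
            nonlocal pos
            node = atom()
            while peek() == "*":
                pos += 1
                node = ("mul", node, atom())
            return node
        def atom():                      # atom := NUM | '(' expr ')'
            nonlocal pos
            kind, value = tokens[pos]
            pos += 1
            if kind == "NUM":
                return ("num", value)
            if kind == "(":
                node = expr()
                pos += 1                 # consume ')'
                return node
            raise SyntaxError(f"unexpected token {kind}")
        return expr()

    # 3. Code generation: walk the tree and emit instructions for a
    # stack machine (it could just as well emit real machine code).
    def codegen(node):
        op, *args = node
        if op == "num":
            return [("PUSH", args[0])]
        left, right = args
        return codegen(left) + codegen(right) + [(op.upper(),)]

    print(codegen(parse(lex("1 + 2 * 3"))))
    # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('MUL',), ('ADD',)]

Note that the code generator never looks at the program text; it consumes only the tree the parser built, which is what makes the phases separable.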
Clearly the person writing the code generator has to understand the target machine at a very deep level, including its instruction set, processor pipeline, and cache behavior; otherwise the programs produced by the compiler would be slow and inefficient. They may well be able to read and write machine code represented as octal or hexadecimal numbers, but they'll generally write functions that generate the machine code, referring internally to tables of machine instructions. In theory, the folks writing the lexer and the parser need not know anything about machine-code generation. In fact, some modern compilers let you plug in your own code generation routines, which might emit machine code for a CPU the lexer and parser writers have never heard of.
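As a hedged illustration of that table-driven style, the sketch below assembles a few genuine x86-64 encodings (mov eax, imm32 is opcode 0xB8 followed by a little-endian 32-bit immediate, and ret is 0xC3); the table layout and function names are hypothetical:

    import struct

    # Knowledge of the target instruction set lives in a table, so the
    # emission routines never hand-write hex. These one-byte opcodes
    # are real x86-64; the surrounding structure is illustrative.
    X86_64 = {
        "nop":      b"\x90",
        "ret":      b"\xc3",
        "push_rax": b"\x50",
        "pop_rax":  b"\x58",
    }

    def emit_mov_eax_imm32(value):
        # mov eax, imm32: opcode 0xB8, then the immediate, little-endian.
        return b"\xb8" + struct.pack("<I", value)

    def assemble(instructions):
        code = bytearray()
        for inst in instructions:
            if inst[0] == "mov_eax":
                code += emit_mov_eax_imm32(inst[1])
            else:
                code += X86_64[inst[0]]
        return bytes(code)

    # A function body that loads 42 into eax and returns.
    print(assemble([("mov_eax", 42), ("ret",)]).hex())  # b82a000000c3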
In practice, however, compiler writers working on every stage know a lot about different processor architectures, and that knowledge helps them design the data structures the code generation step will need.
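One place that shared knowledge shows up is the intermediate representation handed to the code generator. The sketch below is hypothetical, but its shape (one operation, at most two source operands per instruction) reflects what backend authors know: it maps cleanly onto the register machines of many real architectures:

    from dataclasses import dataclass

    # A three-address-style IR instruction; virtual registers like "t0"
    # get mapped to real registers later, in a target-specific pass.
    @dataclass
    class TAC:
        op: str                  # e.g. "load_const", "add", "mul"
        dest: str                # virtual register receiving the result
        src1: str | None = None
        src2: str | None = None

    program = [
        TAC("load_const", "t0", "2"),
        TAC("load_const", "t1", "3"),
        TAC("mul", "t2", "t0", "t1"),
    ]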