There is an assembler that I am writing, located in the file asm.c in this repository. It uses the instruction set in the specs file to produce an output binary. (The program that will run this binary has not been written yet; its beginning is in main.c.) Using the example program echochar.sdmasm, the assembler outputs the desired binary. Here it is in hex:
90 00 a0 00
So far it only does this on a Windows machine under Cygwin. (I have not yet tested it under Linux.) On an Intel-based Mac, this is the resulting binary:
00 90 00 a0
This looks like a difference in endianness, but I thought that could only happen between two completely different processors. This seems to be an endianness difference between operating systems, not processors. Is that really the case, or is something else going on here that I am not getting?
I just managed to test it on Linux; the output differs in the same way it does on the Mac.
Okay, something else is going on entirely. Output from hd on Linux:
00000000 00 90 00 a0 |....|
00000004
Output from hexdump on Linux:
0000000 9000 a000
0000004
This is really odd. I can't tell which one is the correct output.