open-roms

JiffyDOS fast protocol description

Here is a description of the JiffyDOS protocol, as implemented in Open ROMs, from the perspective of the IEC bus controller. It is partially based on documentation found on the internet, and partially reverse engineered by observing IEC logs generated by the VICE emulator.

For legal reasons, no disassembling of the JiffyDOS ROMs was performed (only IEC transmission logs were checked); I also refrained from checking non-commercial 6502 JiffyDOS implementations (as they might be based on disassembled ROMs). This, unfortunately, ruled out most of the documentation from the www.nlq.de page, as it mostly shows the commented original source code. Thus, the information provided here might be incomplete or not fully accurate!

Documents, samples

Also, please make sure you are familiar with the IEC protocol in general; I highly recommend studying the guide by Michael Steil, available here: https://www.pagetable.com/?p=1135.

Tools

Timing information

When sending/receiving a byte, the JiffyDOS protocol timing is much stricter than that of the standard IEC. It is not enough to mask the interrupts - one has to make sure nothing will steal CPU cycles from the transfer code. There are two ways to achieve this:

- 'badline synchronization' - wait for a raster position where the VIC-II is guaranteed not to steal cycles during the transfer,
- screen blanking - turn the display off, so that no badlines occur at all.

Timing details are omitted on purpose - they are already described in [2]; besides, most of the timing was determined by observing VICE IEC logs and (for example) adding NOPs when necessary. The protocol is not cycle-exact, so our implementation probably behaves slightly differently than the original one.
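For illustration only, here is a rough C sketch (cc65-style register access, hypothetical helper names, placeholder values) of the two ways of getting such a 'quiet' window; the actual Open ROMs implementation is 6502 assembly and uses its own margins:

```c
/* Sketch only - hypothetical helpers, not the Open ROMs implementation.
 * Interrupts are assumed to be already masked (SEI); on top of that the
 * VIC-II must be prevented from stealing cycles ("badlines"). */
#define VIC_CTRL1  (*(volatile unsigned char *)0xD011)  /* bit 4 = display enable (DEN) */
#define VIC_RASTER (*(volatile unsigned char *)0xD012)  /* raster line, low 8 bits */

/* Variant 1: blank the screen - with the display off the VIC-II generates
 * no badlines at all (an artificial delay may still be needed, see the
 * byte transfer sections below). */
static void quiet_window_by_blanking(void)
{
    VIC_CTRL1 &= (unsigned char)~0x10;
}

/* Variant 2: wait for a raster position safely outside the badline area
 * (lines $30-$F7) before starting the time-critical transfer. 0xFB is an
 * arbitrary example value, not the one used by Open ROMs. */
static void quiet_window_by_raster_wait(void)
{
    while (VIC_RASTER != 0xFB) { /* busy-wait */ }
}
```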

Protocol detection

Protocol detection happens when the IEC bus controller sends a command (under ATN) - the first 7 bits of the command code are sent normally, and the detection is performed before sending the last bit.
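A hedged C sketch of how this detection window might look from the controller side - the helper names, the window length and the wait-for-release step are my assumptions, only the overall idea (pause before the last bit, JiffyDOS device answers by pulling DATA, at least for TALK/LISTEN as noted below) comes from the observed behaviour:

```c
/* Sketch only - hypothetical helpers; the window length is a placeholder,
 * not the value used by JiffyDOS or Open ROMs. */
#define CIA2_PRA (*(volatile unsigned char *)0xDD00)
#define DATA_IN  0x80   /* bit 7: reads 0 while DATA is pulled low */

extern void send_iec_bit_under_atn(unsigned char bit);  /* standard slow IEC bit */

static unsigned char device_is_jiffydos;

static void send_command_byte(unsigned char cmd)
{
    unsigned char i;

    /* First 7 bits go out exactly as in the standard IEC protocol (LSB first). */
    for (i = 0; i < 7; ++i) {
        send_iec_bit_under_atn(cmd & 1);
        cmd >>= 1;
    }

    /* Detection window: pause before the last bit and watch DATA.
       A JiffyDOS-capable device announces itself by pulling DATA during
       this pause (observed for TALK/LISTEN commands). */
    for (i = 0; i < 200; ++i) {                    /* placeholder window length */
        if ((CIA2_PRA & DATA_IN) == 0) {
            device_is_jiffydos = 1;
            /* assumption: wait until the device releases DATA again */
            while ((CIA2_PRA & DATA_IN) == 0) { }
            break;
        }
    }

    /* ... then the last bit and the usual IEC byte acknowledgement. */
    send_iec_bit_under_atn(cmd & 1);
}
```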

Sending a command

IEC commands are sent normally (with the additional protocol detection flow before the last bit). Although it would be enough to perform the detection within the TALK and LISTEN commands only, the original JiffyDOS ROM tries to detect it every time. Omitting the detection is definitely not safe (depending on the command, it might even cause the communication with the device to fail; at least the 1541 JiffyDOS ROM shows such behavior). Moreover, it seems that only the TALK/LISTEN commands cause the device to respond by pulling DATA; the controller side implementation has to be prepared for this.

Turnover

Turnover (both to listener and back to talker) is performed as usual.

Sending a byte

The receiver releases the DATA line once it is ready to receive a byte. When the bus controller is ready too (after the possible 'badline synchronization' - this wasn't tested, but an artificial delay might be needed when maintaining synchronization by screen blanking), it releases the CLK on its side.
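A minimal sketch of this start-of-byte handshake, using the same hypothetical helpers as above (the real code is cycle-timed 6502 assembly):

```c
/* Sketch only - hypothetical helpers. */
#define CIA2_PRA (*(volatile unsigned char *)0xDD00)
#define CLK_OUT  0x10   /* bit 4: writing 1 pulls CLK low */
#define DATA_IN  0x80   /* bit 7: reads 0 while DATA is pulled low */

extern void wait_for_safe_raster(void);   /* 'badline synchronization', see above */

static void jiffy_send_begin(void)
{
    /* Receiver signals readiness by releasing DATA. */
    while ((CIA2_PRA & DATA_IN) == 0) { }

    /* Make sure the VIC-II will not steal cycles during the transfer. */
    wait_for_safe_raster();

    /* Controller releases CLK on its side - from here on the timing is fixed. */
    CIA2_PRA &= (unsigned char)~CLK_OUT;
}
```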

What happens now has to be carefully synchronized between the bus controller and the receiver; there is not much margin in terms of CPU cycles - see the source code and [2] for details. Basically, we send 2 bits at a time (using the CLK and DATA lines), without any sort of confirmation; the only synchronization is done via instruction timing.

Encoding the low nibble is done via a lookup table, reverse engineered by observing the IEC logs of the original JiffyDOS ROMs in action. Unfortunately, the IEC registers of the C64 and the 1541 do not have the same layout - some bits are inverted and the bit positions differ (see [3]). Thus every fast IEC protocol has to perform some kind of encoding/decoding; when sending a byte using the JiffyDOS protocol, the drive handles the high nibble and the controller the low one.
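One possible shape of such a table (the name and the layout are mine, and the actual values - reverse engineered for Open ROMs - are deliberately not reproduced here):

```c
/* Sketch only - the real table lives in the Open ROMs source. */
#define CLK_OUT  0x10   /* $DD00 bit 4: writing 1 pulls CLK low  */
#define DATA_OUT 0x20   /* $DD00 bit 5: writing 1 pulls DATA low */

/* For every value of the low nibble: pre-computed CLK/DATA output patterns,
 * one per 2-bit pair, so that during the time-critical part the code only
 * has to do a lookup and a store to $DD00. */
extern const unsigned char jiffy_tx_lo_nibble[16][2];
```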

Afterwards the controller either pulls CLK quickly, or signals EOI by delaying this action. In our implementation the acceptable timing was adjusted by experimentation (a pre-existing delay subroutine was used to reduce code size; also note that NTSC machines have a slightly higher CPU frequency) - but it's important not to wait too long: the 1541 JiffyDOS ROM can handle it, but the SD2IEC implementation cannot!
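The end-of-byte step, sketched with placeholder timing (the 300 microseconds below is an arbitrary illustration, not the experimentally tuned Open ROMs value):

```c
/* Sketch only - delay length is a placeholder; a delay that is too long
 * breaks SD2IEC, as described above. */
#define CIA2_PRA (*(volatile unsigned char *)0xDD00)
#define CLK_OUT  0x10   /* bit 4: writing 1 pulls CLK low */

extern void delay_us(unsigned int us);   /* hypothetical busy-wait helper */

static void jiffy_send_finish(unsigned char signal_eoi)
{
    if (signal_eoi) {
        /* EOI: delay the CLK pull - long enough for the 1541 JiffyDOS ROM
           to notice, short enough that SD2IEC still accepts it. */
        delay_us(300);                   /* placeholder value */
    }
    CIA2_PRA |= CLK_OUT;                 /* pull CLK - end of byte */
}
```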

Receiving a byte

A device ready to send a byte signals this by releasing CLK; the bus controller then performs the 'badline synchronization' (note: if the screen blanking method was chosen, an artificial delay has to be inserted, otherwise communication with the 1541 fails; also note the slightly higher CPU frequency of NTSC machines!). The controller releases the DATA line to signal that it is ready to accept bits.
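The start-of-byte handshake on the receiving side, sketched with the same hypothetical helpers as in the sending case:

```c
/* Sketch only - hypothetical helpers. */
#define CIA2_PRA  (*(volatile unsigned char *)0xDD00)
#define DATA_OUT  0x20   /* bit 5: writing 1 pulls DATA low */
#define CLK_IN    0x40   /* bit 6: reads 0 while CLK is pulled low */

extern void wait_for_safe_raster(void);  /* 'badline synchronization' */

static void jiffy_receive_begin(void)
{
    /* Device signals it is ready to send by releasing CLK. */
    while ((CIA2_PRA & CLK_IN) == 0) { }

    /* Keep the VIC-II out of the way; with the screen-blanking method an
       extra artificial delay is needed here, otherwise the 1541 fails. */
    wait_for_safe_raster();

    /* Controller releases DATA - it is now ready to accept bits. */
    CIA2_PRA &= (unsigned char)~DATA_OUT;
}
```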

Again, what happens now has to be carefully synchronized between the bus controller and the sending device - see [2]. Bits are received two at a time (on the CLK and DATA lines), without any sort of confirmation; the only synchronization is done via instruction timing (see the source code for the exact bit order).

No decoding is necessary - we just need to put the received bits in the proper positions, using bit shifting and EOR manipulation to cancel the effect of the non-IEC bits of the CIA register. I had to insert some delays (again - watch out for the PAL/NTSC clock difference!) to make the protocol work.
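To give an idea of the general shape only - the sampling instants, the pair order, the shift amounts and the final correction are all taken from the Open ROMs source and are not reproduced here; the real 6502 code achieves the same with shifts and EOR instead of the masking shown:

```c
/* Sketch only - placeholder bit positions, not the real JiffyDOS order. */
#define CIA2_PRA (*(volatile unsigned char *)0xDD00)

static unsigned char jiffy_receive_bits(void)
{
    unsigned char s0, s1, s2, s3, result;

    /* Four reads of $DD00 at fixed cycle offsets; each one delivers two
       bits of the byte on CLK IN (bit 6) and DATA IN (bit 7). In the real
       implementation the gaps consist of concrete instructions/NOPs,
       adjusted for the PAL/NTSC clock difference. */
    s0 = CIA2_PRA;  /* ...fixed number of cycles... */
    s1 = CIA2_PRA;  /* ...fixed number of cycles... */
    s2 = CIA2_PRA;  /* ...fixed number of cycles... */
    s3 = CIA2_PRA;

    /* Move the sampled CLK/DATA bits into place (placeholder positions). */
    result  = (unsigned char)((s0 >> 6) & 0x03);
    result |= (unsigned char)(((s1 >> 6) & 0x03) << 2);
    result |= (unsigned char)(((s2 >> 6) & 0x03) << 4);
    result |= (unsigned char)(((s3 >> 6) & 0x03) << 6);
    return result;
}
```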

At the end the controller fetches the CLK/DATA bits and pulls DATA. If CLK is set at a certain point, it means EOI (I'm not 100% sure about that, but such an implementation seems to work correctly).
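The final step, as a sketch - the exact moment at which CLK is sampled is timing-dependent, and the EOI interpretation is the uncertain one described above:

```c
/* Sketch only - hypothetical helpers. */
#define CIA2_PRA  (*(volatile unsigned char *)0xDD00)
#define DATA_OUT  0x20   /* bit 5: writing 1 pulls DATA low */
#define CLK_IN    0x40   /* bit 6: 1 = CLK released */

/* Returns non-zero if the byte appears to be the last one (EOI). */
static unsigned char jiffy_receive_end(void)
{
    unsigned char port = CIA2_PRA;   /* fetch the CLK/DATA state */

    CIA2_PRA |= DATA_OUT;            /* acknowledge by pulling DATA */

    /* CLK still set (released) at this point is treated as EOI. */
    return (unsigned char)((port & CLK_IN) != 0);
}
```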