Register $1c0f on VIA2 in the Commodore 1541 disk drive is pretty much undocumented. This blog post will not go into the inner workings of the 1541 disk drive; there are many great resources for that, and I assume you have read them, or will read up on the matter before reading further. ;-)
The 16th register ($1c0f) of the VIA2 chip (a 6522) in the drive is documented as doing the same as the 2nd register ($1c01): the port A input/output register. The difference is that with $1c0f no handshaking will occur.
By disabling input latching (meaning the data level on the port A pins will not be buffered until fetched), register $1c0f will simply reflect the data on the pins at all times. This means that reading $1c0f fetches the live state of the port A pins. Port A is a parallel system of 8 lines, each representing one bit. Together, PA0-PA7 make up one byte.
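As a minimal sketch of what "disabling input latching" means in code (my own snippet, with the standard 6522 register layout): clear bit 0 of VIA2's Auxiliary Control Register at $1c0b.

    lda $1c0b       ; VIA2 Auxiliary Control Register (ACR)
    and #%11111110  ; bit 0 = 0: disable port A input latching
    sta $1c0b
    lda $1c0f       ; now returns the live state of PA0-PA7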
VIA2 is connected to the read circuit of the 1541, and the bits read as the disk spins arrive at PA0-PA7; in fact, they are shifted onto them. Since the ideal speed of the spinning disk is 300 rpm, there is a capped number of cycles per byte read. We know the disk is divided into 4 speed zones, because the linear speed under the head drops as you move closer to the center of the disk. To maintain roughly the same bit density, the decoder clock for incoming flux changes (which are translated into bit states) is different for each of the 4 zones. The bottom line is that ideally each bit read from the disk takes 3.25 microseconds (in the fastest zone), 8 bits then being 26 microseconds. As the 1541 has a true 1 MHz clock speed, this means that in theory 8 new bits appear on the 8 lines every 26 cycles.
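For reference, here is my summary of how those numbers fall out of the hardware (the 16 MHz crystal and the per-zone dividers come from the schematics; double-check against your own documentation):

    decoder clock = 16 MHz crystal / divider (13-16, per speed zone)
    bit cell      = 4 decoder clock periods

    tracks  1-17 : 4 * 13/16 us = 3.25 us/bit -> 26 us (26 cycles) per byte
    tracks 18-24 : 4 * 14/16 us = 3.50 us/bit -> 28 us per byte
    tracks 25-30 : 4 * 15/16 us = 3.75 us/bit -> 30 us per byte
    tracks 31-35 : 4 * 16/16 us = 4.00 us/bit -> 32 us per byte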
Since reading $1c0f tells us the state of the pins at any given time, can we time these reads so that we get the correct bytes? This turns out to be mostly impossible, unless you are willing to do some precise software sync based on custom bit patterns before reaching the actual sector data.
The reason for this is twofold. First, you need to know when the first bit of the first byte arrives. You might use the BYTE-READY signal to get close to a starting point; however, this relies on a looping branch instruction (i.e. bvc), and depending on the speed of the disk at that moment, plus some hardware timing, the BYTE-READY signal may come at different times. This can cause the branch to loop once more, or not, adding up to a 2 to 3 cycle difference before the routine continues, versus the "optimal" of just 1 cycle. So a read of $1c0f after that may already be offset by 2 microseconds from where the decoder actually is as the drive spins, right off the bat.
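To illustrate that jitter (a sketch; the label is mine):

    wait:  bvc wait    ; 3 cycles per iteration while waiting
           clv         ; BYTE-READY may have arrived anywhere within
                       ; the last bvc, so we fall through between
                       ; roughly 1 and 3 cycles after the signal
           lda $1c0f   ; this raw read is thus already offset by an
                       ; unpredictable couple of cycles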
The second reason is that the drive doesn't spin at exactly 300 RPM, nor is its speed aligned with the clock cycles of the CPU in terms of the bit positions on the disk. Even though the hardware responds to deviations and adjusts the motor speed if needed, there will still be an ever-so-slight deviation from 300 RPM. Also, some drives are not tuned right, so they might be slightly off 300 to begin with!
Because of all of these problems, we normally use the 6522 with latching and the BYTE-READY signal (not the BIT-ready signals in the decoder, since we don't have access to those through registers). Thus each byte comes in at roughly, but not exactly, 26 cycles. We use BYTE-READY to be informed that a new byte is ready on port A, read $1c01, clear the Overflow flag (BYTE-READY is wired to the CPU's SO pin, which sets it), and wait for the next byte to be ready. Also, even though we decode in 4 speed zones, there will be slight differences in the speed of the flux changes as the head moves in or out.
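In code, the normal latched read looks something like this (a sketch; the buffer address and labels are mine):

           ldy #$00
    loop:  bvc loop     ; wait for BYTE-READY (sets the overflow flag
                        ; via the SO pin)
           clv          ; acknowledge by clearing the overflow flag
           lda $1c01    ; read the latched byte; this access also
                        ; signals DATA-TAKEN to the hardware
           sta $0300,y  ; store in drive RAM (buffer address assumed)
           iny
           bne loop     ; collect 256 bytes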
No wonder Commodore simply states that $1c0f is "unused", or even omits it from documentation completely. Still, why not take a look at what happens if we do use it.
For this test, I took one sector and filled it with a GCR code of $B7, in binary "10110111": a fixed order of bits. See what I did? One, two, three. So reading $1c0f should ideally return $B7 every 26 cycles, right?
Let's look at three tests. Every test run produces its own unique pattern for this one sector.
So what do we see here? Hex and Dec are just the number of consecutive bytes in the sector read this way (reading the state of the port A pins every 26 CPU cycles) as the drive spins. For background: the drive first looks for the right track, then for the right sector sync. The routine that delivers the outcome above starts once the data sync block is found. It first waits normally for the BYTE-READY signal via a bvc loop; when that hits, the byte read is ignored (but LDA $1c01 is still performed to signal DATA-TAKEN), and then the registers are set to let everything free-roam, so we only get the actual pin state when reading from $1c0f. This is timed so that the routine kicks in 26 cycles after the BYTE-READY signal. This means the first GCR bytes are still part of the data header block, after which the actual sector data comes (which should be $B7 for every byte loaded).
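In outline, that routine looks like this (a simplified sketch; the buffer address, labels and exact padding are placeholders, not the literal test code):

    wait:  bvc wait       ; one last normal BYTE-READY wait
           clv
           lda $1c01      ; byte ignored, but the read signals DATA-TAKEN
           lda $1c0b
           and #%11111110 ; disable port A input latching: free-roam
           sta $1c0b
           ldx #$00
           ; ...pad here so the first lda $1c0f below lands
           ; 26 cycles after the BYTE-READY signal...
    loop:  lda $1c0f      ; 4 cycles - raw state of PA0-PA7
           sta $0400,x    ; 5 cycles - drive RAM buffer (placeholder)
           inx            ; 2 cycles
           nop            ; 2 \
           nop            ; 2  |
           nop            ; 2  | 12 cycles of padding: the whole
           nop            ; 2  | loop takes exactly 26 cycles
           nop            ; 2  |
           nop            ; 2 /
           bne loop       ; 3 cycles while the branch is taken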
Back to the figure. "Binary" is the binary state of the byte as depicted by the 8 pins of port A, "GCR" is the hexadecimal representation of those pins, and finally "bit too late" or "bit too early" says by how many bits the value read from $1c0f was shifted compared to the actual byte stored on the disk (i.e. $B7). For the first test shown, it so happened that 80 bytes read directly from the actual state of the pins on port A every 26 cycles were indeed $B7. However, the next 28 incoming bytes read the state too late by 1 bit (an extra shift had already happened). We continue to read bits too late until we hit 1 byte read as $ff! This means the state of all 8 pins at that moment was '1'. Interesting. Next we read 41 more bytes shifted 1 more bit to the right (too late), and again we hit an $ff, followed by a byte that seems to be shifted 1 bit earlier than the one before the $ff.
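To make "1 bit too late" concrete: assuming it means the 8-bit window we sample has slipped one bit further along the repeating $B7 stream, the values read would be:

    stream : ...10110111 10110111 10110111...
    on time     : 10110111 = $b7
    1 bit late  : 01101111 = $6f
    2 bits late : 11011110 = $de
    3 bits late : 10111101 = $bd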
This pattern is similar in the other two tests. Clearly there are timing differences at play here. In essence, the CPU reads the state of the pins at a fixed interval of 26 microseconds, but the magnetic flux transitions passing the drive head and decoder, which produce the port A pin states, are clearly out of phase with this fixed read timing.

What strikes me is the $ff states. Under the right timing conditions, we apparently read the pins as '11111111'. I find this interesting, since I have not found it explained in any of the literature I have read about the hardware. I assume this is a state we can catch at exactly the right time: when 8 bits have been shifted onto the port A pins, the pins are "reset" before the next bits are shifted in, so the state of all 8 pins goes high for an instant. And we just happen to pick this up at exactly the right nanosecond? Might be.

We know that a BIT CELL in the read circuit consists of 4 decoder clock periods. If the magnetic flux during the first of the 4 parts is a change in flux, this is counted as a "1" bit. Documentation says this causes the decoder clock to be reset early, after 2 more decoder clock cycles. It looks like a "1" bit leads to a slightly earlier reset of the BIT CELL than a "0". According to the documentation, the BIT CELL mechanism requires new states to be active for at least 2.5 microseconds before they are considered valid. I therefore assume a shift in magnetic flux needs to stay the same for at least 2.5 microseconds, after which it is considered a 1; the clock etc. is reset, the bit state is shifted onto the pins, and the decoder awaits the next flux change. If that is the case, then the rest of the work would take 0.75 microseconds, since we know a new bit should be ready on the pins every 3.25 microseconds. In that time, the states on the other pins need to be shifted one place, the new state added, and the system reset to wait for the next flux situation.
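If that reading is right, the per-bit time budget in the fastest zone would be:

    flux state held steady (validity window) : 2.50 us
    shift, latch, reset the decoder          : 0.75 us
    total bit cell                           : 3.25 us (= 26 us per byte / 8)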
I also wonder if a "1" bit causes all of this to happen quicker than a "0" state, which would mean there are slight timing differences caused by the bit state itself. Though that would not make sense, as during the writing process these flux changes should be laid down with constant timing, regardless of 0 or 1.
Anyway, I think I'll conclude this info for now. At this point I can see why Commodore considered the $1c0f register useless for practical purposes. Still, there may yet be a use for it, such as exact timing of the drive's RPM, for example. Although that can also be done by other means; just a matter of calculation and averages. No big deal.
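For example, RPM can be measured without $1c0f at all: at exactly 300 RPM one revolution takes 200 milliseconds, so counting CPU cycles between two passes of the same sector header gives

    cycles per revolution at 300 RPM : 60 s / 300 * 1,000,000 = 200,000
    measured RPM                     : 60,000,000 / counted cycles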
In conclusion, the tests using $1c0f showed an interesting reset state on the port A pins between new bytes. Whether there is any practical use for register $1c0f remains to be seen.