In-line device turnaround time
Posted by sjackman, April 28th, 2010

Hi,

When an in-line device receives a request packet, it must switch to receiving on its command ports within 132 μs of the end of the packet. It then listens on the command ports, and when it sees a falling edge it must switch to transmitting on the responder port within some period of time, t. What is the maximum allowed value of t between the falling edge on the command port and the start of transmission on the responder port?
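For concreteness, here is a minimal sketch of the kind of firmware turnaround I have in mind: a loop that polls a PIO input for the line going low and then enables the responder-port driver. The register names and addresses are hypothetical placeholders, not from any particular part or from the standard; the point is that the polling and driver-enable latency is exactly the time t in question.

/* Hypothetical polled turnaround, assuming memory-mapped PIO and driver-enable
 * registers (names and addresses are placeholders, not from any real part). */
#include <stdint.h>

#define PIO_COMMAND_PORT  (*(volatile uint32_t *)0x40001000u) /* hypothetical */
#define DRIVER_ENABLE     (*(volatile uint32_t *)0x40001004u) /* hypothetical */

void wait_for_turnaround(void)
{
    /* Wait for the active command port to go low (the falling edge). */
    while (PIO_COMMAND_PORT & 1u)
        ;                     /* polling latency adds directly to t */

    /* Switch the responder-port transceiver to transmit. */
    DRIVER_ENABLE = 1u;       /* everything up to here is the turnaround time t */
}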

If the response is a non-discovery packet, which begins with a break, the in-line device may shorten that break by no more than 22 μs, so that is an upper limit on this time. For a discovery response packet, which has no break, any delay shortens the actual bit times of the first byte of the discovery response.

The first bytes of the discovery response are the seven 0xfe bytes of the preamble. The bit pattern on the line for one preamble frame is 00111111111: the first 0 is the start bit, the second 0 is the least-significant data bit of 0xfe (sent LSB first), and the last two 1s are the stop bits, for a total of 11 bits. So the first low period (the two zeros) should be 8 μs. By how much can this be shortened? The most relevant timing I can find is 4.2.3 Bit Distortion, which states that the total bit time must not change by more than 75 ns (1.875%). That is a very tight requirement on the turnaround time, t, and likely can't be met by firmware polling a PIO input. If the first low period is shortened too much (so that the first zero data bit is lost), the receiver (that is, the controller) will see a 0xff byte rather than 0xfe. This may cause the controller to drop the packet.
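To make the arithmetic explicit, here is a small worked example. It is only a sketch assuming the standard 250 kbit/s bit rate; the constants are written out here, not taken from any library or normative text.

/* Worked timing numbers for the discussion above (250 kbit/s assumed). */
#include <stdio.h>

int main(void)
{
    const double bit_time_us   = 4.0;                /* 1 / 250 kbit/s */
    const double first_low_us  = 2.0 * bit_time_us;  /* start bit + LSB of 0xfe */
    const double distortion_ns = 75.0;               /* 4.2.3 Bit Distortion limit */

    printf("bit time           : %.3f us\n", bit_time_us);
    printf("first low period   : %.3f us\n", first_low_us);
    printf("allowed distortion : %.1f ns (%.3f %% of a bit)\n",
           distortion_ns, 100.0 * (distortion_ns / 1000.0) / bit_time_us);

    /* 0xfe on the line, LSB first: start(0) 0 1 1 1 1 1 1 1 stop(1) stop(1).
     * If the leading low period is eaten, the receiving UART sees only ones
     * until the next start bit, i.e. the byte arrives as 0xff (or not at all). */
    return 0;
}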

The in-line device may drop an entire preamble byte (section 7.5). That requires that the in-line device turn around no sooner than 8 μs and no later than 88 μs.
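Putting those windows together, this is how I read the effect of the turnaround delay t. The thresholds below are the numbers argued in this post, not quoted from the standard, and for non-discovery responses the 22 μs break-shortening limit further caps t in practice.

/* Illustrative classification of the turnaround delay t (in nanoseconds),
 * following the windows argued in this post. */
typedef enum {
    TURNAROUND_CLEAN,          /* < 75 ns: within the bit-distortion budget    */
    TURNAROUND_DISTORTS_BYTE,  /* 75 ns .. 8 us: shortens the first low period */
    TURNAROUND_DROPS_PREAMBLE, /* 8 us .. 88 us: drops whole preamble byte(s)  */
    TURNAROUND_TOO_LATE        /* > 88 us: later than the window discussed     */
} turnaround_result_t;

turnaround_result_t classify_turnaround_ns(unsigned long t_ns)
{
    if (t_ns < 75UL)      return TURNAROUND_CLEAN;
    if (t_ns < 8000UL)    return TURNAROUND_DISTORTS_BYTE;
    if (t_ns <= 88000UL)  return TURNAROUND_DROPS_PREAMBLE;
    return TURNAROUND_TOO_LATE;
}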

I see two possible solutions.

1. All devices should ignore preamble bytes of 0xff as well as 0xfe. This would allow the first 8 μs low period of the discovery response to be shortened by an arbitrary amount (see the controller-side sketch after the discussion of these options).

2. Specify the timing requirement between receiving a falling edge on the command ports and transmitting on the responder port. This timing requirement would be either less than 75 ns, or between 8 μs and 22 μs; the range between 75 ns and 8 μs would be disallowed.

The "less than 75 ns" option allows the turnaround to be implemented in hardware and permits in-line devices that do not drop a preamble byte. The "between 8 μs and 22 μs" option allows a software solution that does drop a preamble byte. Disallowing the range between 75 ns and 8 μs prevents distorting the bit times of the first preamble byte.
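As a controller-side illustration of option 1, here is a sketch of a preamble scan that tolerates 0xff as well as 0xfe before the 0xaa separator. It is only a sketch of the idea; the function name and buffer handling are my own, not from the standard or any existing stack.

/* Hypothetical controller-side scan for the start of a discovery response.
 * Option 1 above: treat 0xff exactly like a 0xfe preamble byte, so a shortened
 * first low period does not make the controller drop the response. */
#include <stddef.h>
#include <stdint.h>

/* Returns the index of the byte following the 0xaa preamble separator,
 * or -1 if no separator is found. */
int find_discovery_payload(const uint8_t *buf, size_t len)
{
    size_t i = 0;

    /* Skip preamble bytes: 0xfe normally, and 0xff if the first low
     * period of a preamble byte was eaten by the in-line device. */
    while (i < len && (buf[i] == 0xfeu || buf[i] == 0xffu))
        i++;

    if (i < len && buf[i] == 0xaau)
        return (int)(i + 1);   /* encoded UID and checksum start here */

    return -1;                 /* no preamble separator found */
}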

Cheers,
Shaun

4.2.1 Port Turnaround
After receiving an RDM request packet, the in-line device shall switch to receiving data at its
Command Ports, within 132μs of the end of the RDM packet.

After receiving an RDM request packet, the first port that is pulled to a low state for the start of a
BREAK becomes the active port. Note that this port may be the responder port, in which case the
in-line device shall return to forward data flow. Otherwise, data from the active port shall drive the
responder port and may drive all other command ports on the in-line device.