Old May 26th, 2015   #3
Task Group Member
Join Date: Aug 2008
Posts: 369

Are you implementing a controller, or a responder? The problem looks different depending on which type of device you're implementing.

In a responder, you can't really use a timeout to guarantee receiving a complete packet from the controller. Consider the case of a broadcast request:

(~200us delay)
RDM Broadcast Request
(~200us delay)

If you're counting on a 2ms idle to receive a complete RDM request, that idle will never happen and you'd miss the broadcast. Some RDM controllers insert idle time on the wire after a broadcast request, but it's not required.
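Instead of line idle, a responder can frame on the Message Length field in the RDM packet header (slot 2), which tells you exactly how many bytes to expect. Here's a rough sketch of that idea; the class and method names are made up for illustration, and checksum verification is left out:

```python
# Length-based framing for an RDM responder: use the Message Length
# field (slot 2) plus the 2-byte checksum to know when a packet ends,
# rather than waiting for the line to go idle.

RDM_START_CODE = 0xCC
RDM_SUB_START_CODE = 0x01

class RdmFramer:
    def __init__(self):
        self.buf = bytearray()

    def feed(self, byte):
        """Feed one received byte; return a complete packet or None."""
        self.buf.append(byte)
        # Resynchronize (crudely) if the header doesn't look like RDM.
        if self.buf[0] != RDM_START_CODE:
            self.buf.clear()
            return None
        if len(self.buf) >= 2 and self.buf[1] != RDM_SUB_START_CODE:
            self.buf.clear()
            return None
        if len(self.buf) >= 3:
            # Message Length counts slots 0 .. length-1; the 2-byte
            # checksum follows immediately after.
            total = self.buf[2] + 2
            if len(self.buf) >= total:
                packet = bytes(self.buf[:total])
                self.buf.clear()
                return packet
        return None
```

This way the responder knows the packet is complete the instant the last checksum byte arrives, whether or not the controller leaves any idle time afterward.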

Controllers on the other hand *can* use a timeout to detect the end of a response. In fact, I've done exactly that in one of my controllers. The controller can just wait however much time it needs to finish receiving the response before sending the next NSC or RDM packet.
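On the controller side, timeout-based receive is simple: keep reading until the line has been idle longer than some threshold, then treat whatever you've collected as the complete response. A minimal sketch, assuming a hypothetical `read_byte(timeout)` primitive that returns None on timeout (substitute whatever your UART driver provides):

```python
# Timeout-based end-of-response detection for an RDM controller.
# read_byte(timeout) is assumed to return the next received byte,
# or None if no byte arrives within `timeout` seconds.

def receive_response(read_byte, inter_byte_timeout=0.0025):
    """Collect bytes until the line is idle for inter_byte_timeout
    seconds (e.g. just over 2 ms), then return the buffered response."""
    response = bytearray()
    while True:
        b = read_byte(inter_byte_timeout)
        if b is None:  # line idle long enough: response is complete
            return bytes(response)
        response.append(b)
```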

However, I'd consider using a timeout to be poor practice since it can reduce line throughput substantially. Most RDM request/response pairs take between 3 and 4ms to complete (assuming well-behaved responders). Adding a >2ms timeout on top of that can really slow things down, especially during setup/initialization when you're trying to query everything from every responder and don't have active fades in progress (and thus can send NSC data infrequently).
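Some back-of-envelope numbers make the cost concrete (the exact timeout value here is just an example):

```python
# Rough throughput cost of a fixed end-of-response timeout.
per_txn = 0.0035       # ~3-4 ms typical RDM request/response pair
idle_timeout = 0.0025  # an example ">2 ms" idle timeout added per transaction

rate_plain = 1.0 / per_txn                  # transactions/sec without timeout
rate_timeout = 1.0 / (per_txn + idle_timeout)
loss = 1.0 - rate_timeout / rate_plain      # fractional throughput lost
```

With these example figures you drop from roughly 286 to 167 transactions per second, about a 40% loss, which is why length-based framing is preferable when you can manage it.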
ericthegeek