RDM Timing Discussion: Discussion and questions relating to the timing requirements of RDM.
May 26th, 2015 | #1 |
Senior Member
Join Date: Jan 2008
Posts: 102
|
Responder Packet Timing issue
Hello
We have just run into a problem with the timing, as follows:

Table 3-3 Responder Packet Timing, line 1 (Receive: Inter-slot time): Max = 2.1ms
Table 3-4, line 1 (Controller request -> responder response): Max = 2.0ms

That cannot be met by any receiver that waits for more bytes with a 2.1ms timeout. This has come up because a typical receiver using DMA has to use a timeout to decide that a packet has been completely received.

Regards
Bernt
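For readers hitting the same issue, here is a minimal sketch of the reception scheme being described. The driver hooks (dma_rx_start, build_and_send_rdm_response) and the single idle-timeout interrupt are assumptions for illustration, not a real vendor API; the point is only that if the sole "packet complete" event is a 2.1ms idle detection, the 2.0ms response deadline of Table 3-4 has already expired by the time software sees the request.

```c
#include <stdint.h>
#include <stddef.h>

#define INTER_SLOT_TIMEOUT_US 2100U   /* Table 3-3 line 1: max inter-slot time */

/* Hypothetical driver hooks, for illustration only. */
extern void dma_rx_start(uint8_t *buf, size_t max_len, uint32_t idle_timeout_us);
extern void build_and_send_rdm_response(const uint8_t *request, size_t len);

static uint8_t rx_buf[257];           /* start code + up to 256 slots */

/* Fires once per frame, after the line has been idle for idle_timeout_us. */
void dma_idle_timeout_irq(size_t bytes_received)
{
    /* At least 2.1 ms have already passed since the last slot of the request,
     * so the 2.0 ms response window of Table 3-4 is gone before we even start. */
    if (bytes_received > 0 && rx_buf[0] == 0xCC)      /* RDM start code */
        build_and_send_rdm_response(rx_buf, bytes_received);

    dma_rx_start(rx_buf, sizeof rx_buf, INTER_SLOT_TIMEOUT_US);
}
```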
May 26th, 2015 | #2 |
Junior Member
Join Date: Jun 2006
Location: London
Posts: 13
|
Looks like a very good reason not to use a timeout to detect end of packet! However, if this is unavoidable then the specification can still be met:

Set the timeout to 1.05ms. When a timeout is generated but the packet is incomplete, restart the timeout. Two timeouts without anything being received in between is an inter-slot timeout and can be processed accordingly.

My personal preference would be to handle all of the data in interrupts, though.
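Something like the following, assuming a restartable idle timer and a way to query how many bytes the UART/DMA has delivered so far (all names here are hypothetical). The key point is that a complete packet is recognised after only ~1.05ms of idle, which still leaves room inside the 2.0ms response window; only two empty timeouts in a row are treated as the 2.1ms inter-slot timeout.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define HALF_TIMEOUT_US 1050U

/* Hypothetical helpers, for illustration only. */
extern size_t uart_bytes_received(void);
extern void   restart_idle_timer(uint32_t microseconds);
extern bool   packet_is_complete(const uint8_t *buf, size_t len); /* e.g. expected length reached */
extern void   process_packet(const uint8_t *buf, size_t len);
extern void   discard_packet(void);

static uint8_t rx_buf[257];            /* filled by the UART/DMA in the background */
static size_t  bytes_at_last_timeout;

void idle_timer_expired(void)
{
    size_t now = uart_bytes_received();

    if (now == 0) {
        /* Nothing received at all: line is idle, keep waiting. */
    } else if (packet_is_complete(rx_buf, now)) {
        process_packet(rx_buf, now);   /* detected after ~1.05 ms, inside the 2.0 ms window */
        bytes_at_last_timeout = 0;
    } else if (now == bytes_at_last_timeout) {
        discard_packet();              /* two idle periods back to back: inter-slot timeout */
        bytes_at_last_timeout = 0;
    } else {
        bytes_at_last_timeout = now;   /* incomplete but still arriving: wait another period */
    }

    restart_idle_timer(HALF_TIMEOUT_US);
}
```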
May 26th, 2015 | #3 |
Task Group Member
Join Date: Aug 2008
Posts: 379
|
Are you implementing a controller, or a responder? The problem looks different depending on which type of device you're implementing.
In a responder, you can't really use a timeout to guarantee receiving a complete packet from the controller. Consider the case of a broadcast request:

DMX NSC
(~200us delay)
RDM Broadcast Request
(~200us delay)
DMX NSC

If you're counting on a 2ms idle to receive a complete RDM request, it will never happen and you'd miss the broadcast. Some RDM Controllers insert idle time on the wire after a broadcast request, but it's not required.

Controllers, on the other hand, *can* use a timeout to detect the end of a response. In fact, I've done exactly that in one of my controllers. The controller can just wait however much time it needs to finish receiving the response before sending the next NSC or RDM packet. However, I'd consider using a timeout to be poor practice since it can reduce line throughput substantially. Most RDM request/response pairs take between 3 and 4ms to complete (assuming well-behaved responders). Adding a >2ms timeout to this can really slow things down, especially during setup/initialization when you're trying to query everything from every responder and don't have active fades in progress (and thus can send NSC data infrequently).
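To put a rough number on that last point, here is a back-of-the-envelope calculation using the figures quoted above (3-4ms per transaction, a >2ms timeout appended after each response); the exact values will obviously vary per system.

```c
#include <stdio.h>

int main(void)
{
    const double transaction_ms = 3.5;  /* typical request + response, per the post above */
    const double timeout_ms     = 2.1;  /* idle time a timeout-based controller waits after each response */

    printf("without timeout: ~%.0f transactions/s\n", 1000.0 / transaction_ms);                /* ~286 */
    printf("with timeout:    ~%.0f transactions/s\n", 1000.0 / (transaction_ms + timeout_ms)); /* ~179 */
    return 0;
}
```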
May 26th, 2015 | #4 |
Senior Member
Join Date: Jan 2008
Posts: 102
|
Hi and thanks.
This is a responder based on a Linux non-real-time environment. The OS is busy doing heaps of other stuff, so we are not able to use interrupts: they take too much CPU time and we can't get them serviced fast enough when the OS is busy. The comms are therefore usually based on DMA, which is fully automatic, including the inter-slot timeout setting. The reception of an RDM packet or a DMX packet is therefore fully automatic and no software intervention is necessary. The bytes are received until the timeout occurs, then a single interrupt fires and transfers the complete frame/buffer to the host.

We will investigate what else we can do, but the possibilities are limited. If we have to compromise one of the two timing values, which one is preferable?

I should also mention that we are already checking the start code. Based on whether it is DMX or RDM, we use different timeout values, as the specs differ.

Regards
Bernt

Last edited by berntd; May 26th, 2015 at 05:33 PM.
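For what it's worth, the dispatch being described might look roughly like this once the single end-of-frame interrupt has fired. The handler names are made up; only the start-code values 0x00 (DMX null start code) and 0xCC (RDM) come from the standards.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical handlers living in the responder application. */
extern void update_dmx_levels(const uint8_t *slots, size_t count);
extern void queue_rdm_request(const uint8_t *frame, size_t len);

/* Called once per frame, from the single DMA "frame complete" interrupt. */
void frame_complete(const uint8_t *frame, size_t len)
{
    if (len == 0)
        return;

    switch (frame[0]) {
    case 0x00:                              /* DMX512 null start code */
        update_dmx_levels(frame + 1, len - 1);
        break;
    case 0xCC:                              /* E1.20 RDM start code */
        queue_rdm_request(frame, len);      /* handed off to a (slower) OS-level task */
        break;
    default:                                /* other alternate start codes: ignore */
        break;
    }
}
```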
May 26th, 2015 | #5 |
Senior Member
Join Date: Jan 2008
Posts: 102
|
Hi again
I have just concluded the following: the controller timing has to allow 2.8ms for a responder to answer. Interpreting that, any controller must thus wait 2.8ms to receive a response. If our responder then uses a timeout of 2.1ms, it will work with all controllers.

What say you?

Regards
Bernt
May 26th, 2015 | #6 |
Task Group Member
Join Date: Aug 2008
Posts: 379
|
The other problem you're going to run into is that if you wait until the end of the response window to get the DMA interrupt and only then start processing the packet, you'll have no time left to process the request and send the response before the controller gives up and assumes the response was lost.
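A quick budget check makes the point, taking the 2.8ms figure from the previous post at face value and using approximate break/MAB figures for the response; the exact numbers should be checked against the standard.

```c
#include <stdio.h>

int main(void)
{
    const double controller_wait_ms = 2.8;    /* figure quoted in the post above */
    const double idle_detect_ms     = 2.1;    /* inter-slot timeout used to detect end of request */
    const double break_plus_mab_ms  = 0.188;  /* ~176 us break + ~12 us MAB before the response */

    printf("left for building the response: ~%.2f ms\n",
           controller_wait_ms - idle_detect_ms - break_plus_mab_ms);  /* ~0.51 ms */
    return 0;
}
```

Roughly half a millisecond is not much headroom on a loaded non-real-time OS.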
May 26th, 2015 | #7 |
Senior Member
Join Date: Jan 2008
Posts: 102
|
Eric,

I know. The responder has no chance of processing the packet in time due to OS lag. That is why we are responding with ACK_TIMER, as already discussed in another thread here.

I am not sure how we can proceed to make this work. I guess the original spec designers never factored in that some systems need to use an inter-slot timeout to determine packet length. It is really a very common way to do it, and the processors even cater for it with register settings and a special interrupt.

Regards
Bernt
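For anyone following along, the ACK_TIMER mechanism referred to here carries an estimated response time in the parameter data: response type 0x01, PDL of 2, value in 100ms units, most significant byte first (per E1.20). A sketch of filling in those fields, with a made-up helper signature:

```c
#include <stdint.h>

#define RESPONSE_TYPE_ACK_TIMER 0x01   /* E1.20 response type meaning "ask me again later" */

/* Fill in the ACK_TIMER-specific fields of a response being built elsewhere.
 * The three output pointers are an invented interface, for illustration only. */
void fill_ack_timer(uint8_t *response_type, uint8_t *pdl, uint8_t pd[2],
                    uint32_t estimated_ms)
{
    uint16_t ticks = (uint16_t)((estimated_ms + 99U) / 100U);  /* round up to 100 ms units */
    if (ticks == 0U)
        ticks = 1U;                      /* never promise "0 x 100 ms" */

    *response_type = RESPONSE_TYPE_ACK_TIMER;
    *pdl           = 2U;
    pd[0]          = (uint8_t)(ticks >> 8);    /* big-endian estimate */
    pd[1]          = (uint8_t)(ticks & 0xFFU);
}
```

Of course, the ACK_TIMER response itself still has to go out within the Table 3-4 window, which is exactly the problem being discussed.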
May 26th, 2015 | #8 |
Task Group Member
Join Date: Aug 2008
Posts: 379
|
You might explore using a small/cheap microcontroller as an RDM co-processor to handle the low-level timing. I haven't followed recent Real-Time Linux developments, but there may be a solution there too.
Someone who's more clever than I might be able to find a solution, but I'm foreseeing issues if you have to use a single DMX timeout for everything.
May 27th, 2015 | #9 |
Task Group Member
Join Date: Aug 2008
Posts: 379
|
Looks like we're both online and responding at the same time. Hopefully the discussion doesn't get too jumbled and hard to follow...
Can your DMA engine only interrupt on timeout, or can it also terminate on a byte count? Could you set it up to DMA in the first 3 bytes of the request following the break? Then, once you have the Message Length field, you know how many more bytes to expect.

Does your system support chained DMA? The shortest valid request is 26 bytes, so you could set up one DMA request for 3 bytes and then chain a request for 23 bytes. That would give you over 1ms (44us * 23 bytes) to take the first interrupt, read the Message Length field, then schedule a third chained DMA if the request is longer than 26 bytes.

What about timers? Does your system have hardware timer pins? You might be able to do something creative with them to detect different line conditions and timeouts.
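Picking up the chained-DMA idea above, a rough sketch of how the descriptors might be arranged. The dma_append/dma_bytes_done calls are invented for illustration; the 26-byte minimum and the rule that Message Length counts everything up to the Checksum High slot come from E1.20.

```c
#include <stdint.h>
#include <stddef.h>

#define RDM_MIN_PACKET 26U   /* 24-byte minimum message + 2 checksum slots */

static uint8_t pkt[257];
static size_t  expected_total;   /* known once the Message Length slot has arrived */

/* Hypothetical chained-DMA interface: queue a transfer with an optional completion callback. */
extern void   dma_append(uint8_t *dst, size_t len, void (*done)(void));
extern size_t dma_bytes_done(void);

static void chunk_done(void)
{
    if (expected_total != 0 && dma_bytes_done() >= expected_total) {
        /* Whole request is now in pkt[]: verify the checksum, then process it. */
    }
}

static void header_done(void)
{
    /* Runs after slots 0-2 while the chained 23-byte transfer is already in
     * progress, leaving roughly 23 * 44 us (just over 1 ms) to react. */
    expected_total = (size_t)pkt[2] + 2U;  /* Message Length counts up to the Checksum High slot */

    if (expected_total > RDM_MIN_PACKET)
        dma_append(&pkt[RDM_MIN_PACKET], expected_total - RDM_MIN_PACKET, chunk_done);
}

void start_rdm_reception(void)
{
    expected_total = 0;
    dma_append(&pkt[0], 3, header_done);                 /* start code, sub-start code, message length */
    dma_append(&pkt[3], RDM_MIN_PACKET - 3, chunk_done); /* remaining 23 bytes of a minimum packet */
}
```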
May 27th, 2015 | #10 |
Senior Member
Join Date: Jan 2008
Posts: 102
|
Great minds think alike.
As I type this, our programmer is trying to split the reception into two DMA transfers: first 3 bytes to get the length, then another transfer for the rest. Let's see how that goes.

Regards
Bernt
May 27th, 2015 | #11 |
Administrator
|
Looks like I'm a little late to the party tonight, but I was going to suggest the same thing you've already arrived at.
I've used DMA in some of my implementations and I do as suggested: I get the first few bytes so I can read the packet length info, then set my DMA routine to trigger me again once I've gotten them all, and that has worked well. It is really the ONLY way you can use any kind of DMA routine. As Eric mentioned, the critical factor in the timings was maintaining DMX Null Start Code performance, so using timeouts just wouldn't have worked without a more serious impact.
__________________
Scott M. Blair
RDM Protocol Forums Admin