E1.20 RDM (Remote Device Management) Protocol Forums  

RDM Timing Discussion: Discussion and questions relating to the timing requirements of RDM.

May 26th, 2015   #1
berntd
Senior Member
 
Join Date: Jan 2008
Posts: 102
Responder Packet Timing issue

Hello

We have just run into a problem with the timing, as follows:

Table 3-3 Responder Packet Timing
Line 1 Receive: Inter-slot time Max = 2.1ms


Table 3-4
Line 1 Controller request -> responder response Max = 2.0ms

That cannot be met by any receiver that uses a 2.1ms timeout while waiting for more bytes.

This has come up because a typical receiver using DMA has to rely on a timeout to consider a packet completely received.


Regards
Bernt
May 26th, 2015   #2
Nigel Worsley
Junior Member
 
Join Date: Jun 2006
Location: London
Posts: 13

Looks like a very good reason not to use a timeout to detect end of packet! However, if this is unavoidable, the specification can still be met:

Set the timeout to 1.05ms.
When a timeout is generated but the packet is incomplete, restart the timeout.
Two timeouts without anything being received in between add up to an inter-slot timeout and can be processed accordingly.
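
A minimal sketch of that scheme in C (the HAL hooks restart_rx_timeout_us and packet_is_complete are hypothetical names, not a real API):

Code:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL hooks -- illustrative names only. */
void restart_rx_timeout_us(uint32_t us);  /* (re)arm the idle timer       */
bool packet_is_complete(void);            /* length check on bytes so far */

static bool timed_out_once = false;

/* Called for every received byte: any activity clears the timeout chain. */
void on_byte_received(void)
{
    timed_out_once = false;
    restart_rx_timeout_us(1050);          /* 1.05ms, half the 2.1ms budget */
}

/* Called when the idle timer expires. */
void on_rx_timeout(void)
{
    if (packet_is_complete()) {
        /* hand the finished packet to the application */
    } else if (!timed_out_once) {
        timed_out_once = true;            /* first timeout: arm a second one */
        restart_rx_timeout_us(1050);
    } else {
        /* two expiries with no data in between: a genuine >2.1ms gap,
         * so treat it as an inter-slot timeout */
    }
}

With a 1.05ms window, the end of a complete packet is detected with roughly 0.95ms of the 2.0ms response budget left, while two consecutive expiries still add up to the full 2.1ms inter-slot allowance.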

My personal preference would be to handle all of the data in interrupts, though.
May 26th, 2015   #3
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 379

Are you implementing a controller, or a responder? The problem looks different depending on which type of device you're implementing.

In a responder, you can't really use a timeout to guarantee receiving a complete packet from the controller. Consider the case of a broadcast request:

DMX NSC
(~200us delay)
RDM Broadcast Request
(~200us delay)
DMX NSC

If you're counting on a 2ms idle to receive a complete RDM request, it will never happen and you'd miss the broadcast. Some RDM Controllers insert idle time on the wire after a broadcast request, but it's not required.


Controllers, on the other hand, *can* use a timeout to detect the end of a response. In fact, I've done exactly that in one of my controllers. The controller can simply wait however much time it needs to finish receiving the response before sending the next NSC or RDM packet.

However, I'd consider using a timeout to be poor practice, since it can reduce line throughput substantially. Most RDM request/response pairs take between 3 and 4ms to complete (assuming well-behaved responders). Adding a >2ms timeout to this can really slow things down, especially during setup/initialization, when you're trying to query everything from every responder and don't have active fades in progress (and thus can send NSC data infrequently).
May 26th, 2015   #4
berntd
Senior Member
 
Join Date: Jan 2008
Posts: 102

Hi and thanks.

This is a responder based on a non-real-time Linux environment.
The OS is busy doing heaps of other stuff, so we are not able to use interrupts: they take too much CPU time, and we can't service them fast enough when the OS is busy.

The comms are therefore usually based on DMA, which is fully automatic, including the inter-slot timeout setting.
Reception of an RDM packet or a DMX packet is therefore fully automatic, and no software intervention is necessary.

Bytes are received until the timeout occurs; a single interrupt then fires and transfers the complete frame/buffer to the host.

We will investigate what else we can do, but the possibilities are limited.

If we have to compromise one of the two timing values, which one is preferable?



I should also mention that we are already checking the start code. Depending on whether the packet is DMX or RDM, we use different timeout values, as the specs differ.
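
For what it's worth, that selection amounts to a trivial branch on the start code. A sketch in C (set_dma_idle_timeout_us is a hypothetical HAL call; the 1s figure assumes DMX512-A's maximum mark time between slots):

Code:
#include <stdint.h>

void set_dma_idle_timeout_us(uint32_t us);   /* hypothetical HAL call */

#define SC_NULL 0x00   /* DMX512 null start code */
#define SC_RDM  0xCC   /* E1.20 RDM start code   */

/* Called as soon as the start code slot has been received. */
void select_idle_timeout(uint8_t start_code)
{
    if (start_code == SC_RDM)
        set_dma_idle_timeout_us(2100);      /* E1.20 Table 3-3: 2.1ms max */
    else
        set_dma_idle_timeout_us(1000000);   /* DMX512-A: up to 1s between slots */
}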


Regards
Bernt

May 26th, 2015   #5
berntd
Senior Member
 
Join Date: Jan 2008
Posts: 102

Hi again

I have just concluded the following:

The controller timing has to allow 2.8ms for a responder to answer.

Interpreting that, any controller must wait 2.8ms to receive a response. If our responder uses a timeout of 2.1ms, it should therefore work with all controllers.

What say you?

Regards
Bernt
May 26th, 2015   #6
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 379

Quote:
Originally Posted by berntd
Interpreting that, any controller must wait 2.8ms to receive a response. If our responder uses a timeout of 2.1ms, it should therefore work with all controllers.
Unfortunately, no. The 2.8ms time includes time for transmission delay. Transparent Inline Devices (hubs, splitters, etc.) are allowed to delay the signal by up to 88us each way. The timing values given in the standard allow for up to 4 transparent inline devices, which gives a round-trip delay of 704us. See E1.20-2010 Section 4.2.2.

The other problem you're going to run into is that if you wait until the end of the response time to get the DMA interrupt and start processing the packet, you'll have no time left to process the request and send the response before the controller gives up and assumes the response was lost.
May 26th, 2015   #7
berntd
Senior Member
 
Join Date: Jan 2008
Posts: 102

Quote:
Originally Posted by ericthegeek
Unfortunately, no. The 2.8ms time includes time for transmission delay. Transparent Inline Devices (hubs, splitters, etc.) are allowed to delay the signal by up to 88us each way. The timing values given in the standard allow for up to 4 transparent inline devices, which gives a round-trip delay of 704us. See E1.20-2010 Section 4.2.2.

The other problem you're going to run into is that if you wait until the end of the response time to get the DMA interrupt and start processing the packet, you'll have no time left to process the request and send the response before the controller gives up and assumes the response was lost.

Eric, I know.

The responder has no chance of processing the packet due to OS lag. That is why we are responding with ACK_TIMER, as was already discussed in another thread here.

I am not sure how we can proceed to make this work.
I guess the original spec designers never factored in that some systems need to use an inter-slot timeout to determine packet length. It is really a very common way to do it, and processors even cater for it with register settings and a dedicated interrupt.

Regards
Bernt
May 26th, 2015   #8
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 379

Quote:
Originally Posted by berntd
This is a responder based on a non-real-time Linux environment.
The OS is busy doing heaps of other stuff, so we are not able to use interrupts: they take too much CPU time, and we can't service them fast enough when the OS is busy.
My personal opinion is that to do RDM, you really need a software architecture that can respond to events on the wire within a few tens of microseconds. You might be able to stretch that to 100us. But if the software takes a millisecond or more you're going to have problems.

You might explore using a small/cheap microcontroller as an RDM co-processor to handle the low-level timing. I haven't followed recent Real-Time Linux developments, but there may be a solution there too.

Quote:
Originally Posted by berntd
If we have to compromise one of the two timing values, which one is preferable?
The timings in the standard are strict for a reason, much stricter than traditional DMX. This was done to minimize the impact on NSC data throughput. Controllers have a bit more flexibility, but responders are tightly constrained. If you compromise any of the timings, it may cause problems with some systems.

Someone who's more clever than I might be able to find a solution, but I'm foreseeing issues if you have to use a single DMX timeout for everything.
May 27th, 2015   #9
ericthegeek
Task Group Member
 
Join Date: Aug 2008
Posts: 379

Looks like we're both online and responding at the same time. Hopefully the discussion doesn't get too jumbled and hard to follow...

Quote:
Originally Posted by berntd
The responder has no chance of processing the packet due to OS lag. That is why we are responding with ACK_TIMER, as was already discussed in another thread here.
Makes sense, I forgot about that discussion.

Quote:
Originally Posted by berntd
I am not sure how we can proceed to make this work.
I guess the original spec designers never factored in that some systems need to use an inter-slot timeout to determine packet length. It is really a very common way to do it, and processors even cater for it with register settings and a dedicated interrupt.
Non-real-time systems like Linux were discussed when the standard was written. But supporting an environment that might take >10ms, or even >100ms, to handle an event would have seriously compromised the performance of the protocol. This is especially true for responders, where a single poorly behaved responder can degrade the DMX/RDM throughput for everyone on the line. Up to that point, DMX had typically required interrupt response times in the 40us range, so keeping that requirement for RDM was not a major departure.


Can your DMA engine only interrupt on timeout, or can it also terminate on a byte count? Could you set it up to DMA in the first 3 bytes of the request following the break? Then, once you have the Message Length field, you know how many more bytes to expect.

Does your system support chained DMA? The shortest valid request is 26 bytes, so you could set up one DMA transfer for 3 bytes, then chain a transfer for the remaining 23 bytes. That would give you over 1ms (44us * 23 bytes) to take the first interrupt, read the Message Length field, and schedule a third chained DMA if the request is longer than 26 bytes.
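
Something like this minimal sketch in C (dma_receive is a hypothetical HAL call, not a real API; the slot offsets follow E1.20, where slot 2 carries the Message Length and the full packet is Message Length plus two checksum slots):

Code:
#include <stdint.h>

/* Hypothetical HAL: start a DMA transfer and call `done` on completion. */
void dma_receive(uint8_t *dst, uint16_t len, void (*done)(void));

#define HEADER_PEEK 3u    /* START code, sub-START code, Message Length    */
#define MIN_PACKET  26u   /* shortest valid request incl. 2 checksum slots */

static uint8_t  rx_buf[255 + 2];   /* max Message Length plus checksum */
static uint16_t expected_len;

static void on_tail_done(void)
{
    /* Full packet is in rx_buf: verify the checksum, then process. */
}

static void on_min_done(void)
{
    if (expected_len > MIN_PACKET)   /* third transfer only if needed */
        dma_receive(&rx_buf[MIN_PACKET], expected_len - MIN_PACKET,
                    on_tail_done);
    else
        on_tail_done();
}

static void on_header_done(void)
{
    /* rx_buf[2] is the Message Length field; +2 covers the checksum.
     * The chained 23-byte transfer leaves ~1ms to run this handler. */
    expected_len = (uint16_t)rx_buf[2] + 2u;
    dma_receive(&rx_buf[HEADER_PEEK], MIN_PACKET - HEADER_PEEK, on_min_done);
}

/* Kick off reception once the break/MAB has been detected. */
void rdm_rx_start(void)
{
    dma_receive(rx_buf, HEADER_PEEK, on_header_done);
}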

What about timers? Does your system have hardware timer pins? You might be able to do something creative with them to detect different line conditions and timeouts.
May 27th, 2015   #10
berntd
Senior Member
 
Join Date: Jan 2008
Posts: 102

Great minds think alike.

As I type this, our programmer is trying to split the reception into two DMA transfers:
the first of 3 bytes to get the length, and then another transfer for the rest.

Let's see how that goes.

Regards
Bernt
May 27th, 2015   #11
sblair
Administrator
 
Join Date: Feb 2006
Posts: 438

Looks like I'm a little late to the party tonight, but I was going to suggest the same thing it looks like you've already arrived at.

I've used DMA in some of my implementations, and I do as suggested: I get the first few bytes so I can read the packet length info, then set my DMA routine to trigger again once I've received them all, and that has worked well. It is really the ONLY way you can use any kind of DMA routine.

As Eric mentioned, the critical factor in the timings was maintaining DMX Null Start Code performance, so using timeouts just really wouldn't have worked without a more serious impact.
__________________
Scott M. Blair
RDM Protocol Forums Admin