E1.20 RDM (Remote Device Management) Protocol Forums  

August 18th, 2011, #1
eldoMS
Junior Member
Join Date: Aug 2011
Posts: 8

Required handling of sequences of RDM broadcast cmds

Hello

Looking at Table 3-2, line 6: a controller may continuously send a sequence of broadcast RDM SET commands with a minimum spacing of 176 us.

In the broadcast situation, the ACK_TIMER mechanism is not available for a responder to prevent a potential overrun, since an answer is not part of the broadcast mechanism.

What would you suggest a responder do when an overrun actually happens? Could a controller be made aware of this situation so that different handling is possible?

Greetings

Marc
August 18th, 2011, #2
ericthegeek
Task Group Member
Join Date: Aug 2008
Posts: 375

The best solution is to make sure your responder can always handle back-to-back broadcast requests. That means either fully processing each request within 176 us, or saving the request for processing as a background (low-priority) task.

Take the common case of an EEPROM write which can take several ms. I keep a copy of the DMX address in both RAM and EEPROM. When the Set DMX Address broadcast request happens, I immediately update the RAM copy, and set a flag that the EEPROM needs to be updated. Then, the lazy loop (low priority task that handles EEPROM writes, display updates, housekeeping, etc.) writes it to EEPROM whenever it gets around to it. If a GET DMX Address comes in before the EEPROM write is finished, it sends the address from RAM because the EEPROM may be busy or out of date.

There's a slight risk that if the system loses power between the request and the end of the EEPROM write, the address change could be lost, but the window is very small (a few ms) and the impact, if it does occur, is minimal.
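A minimal sketch of this RAM-shadow scheme might look as follows. All names here (`on_set_dmx_address`, `lazy_loop_poll`, etc.) are illustrative, not from any real RDM stack, and the EEPROM is stood in for by a plain variable:

```c
#include <stdint.h>
#include <stdbool.h>

static uint16_t dmx_address_ram;     /* fast shadow copy, always current    */
static uint16_t dmx_address_eeprom;  /* stands in for the real EEPROM cell  */
static bool     eeprom_dirty;        /* set by the packet-level handler     */

/* Called from the RDM receive path: must finish well within 176 us,
   so it only touches RAM and sets a flag. */
void on_set_dmx_address(uint16_t new_addr) {
    dmx_address_ram = new_addr;
    eeprom_dirty = true;
}

/* GET always answers from RAM: the EEPROM copy may be busy or out of date. */
uint16_t on_get_dmx_address(void) {
    return dmx_address_ram;
}

/* Called from the low-priority "lazy loop"; the slow write happens here. */
void lazy_loop_poll(void) {
    if (eeprom_dirty) {
        dmx_address_eeprom = dmx_address_ram;  /* simulated slow write */
        eeprom_dirty = false;
    }
}
```

The key design point is that nothing on the receive path blocks: the slow write is deferred, so back-to-back broadcasts at the 176 us minimum spacing can always be absorbed.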

If you can't do this, then you may have to drop the request. RDM Broadcasts are not guaranteed to be reliable, so you are allowed to drop the packet. However, this should be avoided if at all possible.

That's really all that a responder can do.


If you're building a controller, it's a good idea to wait 10 to 20 ms after sending a broadcast request before you send another broadcast packet. This gives any poorly implemented responders time to finish processing the previous broadcast.
August 18th, 2011, #3
nomis52
Task Group Member
Join Date: May 2010
Location: San Francisco
Posts: 57

This came up at the most recent plugfest. The responder tests now have an additional flag --broadcast_write_delay which you can use to adjust the delay between sending a broadcast set and the next RDM request.
August 19th, 2011, #4
eldoMS
Junior Member
Join Date: Aug 2011
Posts: 8

Hi Eric

Separate from EE memory writes, which I agree can be handled as you describe, there are other reasons a responder may need more time, e.g. when it is a proxy or bridge/gateway device for the destination the data should be going to.

Such a device can indeed store some or even many SET requests, but at some point this can lead to an overrun in a practical device; that is the case I also meant to refer to.

To the standard defining folks:

With regard to the 10 to 20 ms back-off that Eric suggests for a controller in practice to handle the overrun issues: are there plans to integrate a suggested number into the standard, or some way of informing a controller about responder capabilities on this point?

Leaving this point open and up to the implementation can (and probably will) lead to incompatibilities between equipment that could in large part be prevented by some agreement and a future definition on this point.

A related aspect is that a real-life figure of 5 to 20 ms is often sufficient to handle these situations, while the ACK_TIMER has a granularity of 100 ms. Such large time chunks lead to extraordinarily long delays that could be greatly reduced by allowing, in addition to the current definition, some finer-grained timing.

Very interested to hear your opinions!

Greetings

Marc
August 19th, 2011, #5
ericthegeek
Task Group Member
Join Date: Aug 2008
Posts: 375

Quote (originally posted by eldoMS):
Such a device can indeed store some or even many SET requests but at some point it can lead to an overrun in some practical device, that is the case I meant to be referring to also.
One option when this occurs is for the proxy to queue up a response with NACK:NR_PROXY_BUFFER_FULL. For example, if it has received too many broadcast SET FACTORY_DEFAULTS requests, it can queue up a SET_RESPONSE FACTORY_DEFAULTS NACK:NR_PROXY_BUFFER_FULL. This gives the controller a way to know that the proxy has overflowed.
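As a sketch of that overflow path (hypothetical names, a deliberately tiny fixed queue, and a bare PID standing in for a full RDM message):

```c
#include <stdint.h>
#include <stdbool.h>

#define PROXY_QUEUE_DEPTH 4          /* deliberately tiny for illustration */

typedef struct { uint16_t pid; } rdm_request_t;  /* stand-in for a packet */

static rdm_request_t proxy_queue[PROXY_QUEUE_DEPTH];
static int  proxy_queue_len = 0;
static bool nack_proxy_buffer_full_pending = false;

/* Accept a broadcast SET for later forwarding to the proxied devices.
   On overflow, remember to report NACK:NR_PROXY_BUFFER_FULL through the
   queued-message mechanism so the controller learns of the overflow. */
bool proxy_accept_broadcast_set(rdm_request_t req) {
    if (proxy_queue_len >= PROXY_QUEUE_DEPTH) {
        nack_proxy_buffer_full_pending = true;
        return false;                /* request dropped */
    }
    proxy_queue[proxy_queue_len++] = req;
    return true;
}
```

The flag would later be drained via the queued-message mechanism; the sketch only shows the accept/overflow decision.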

Fortunately, assuming a reasonable buffer size in the proxy, these conditions are unlikely to occur in the real world. The fraction of broadcast packets in most RDM systems is relatively small. It's quite rare to see more than a handful of broadcast requests back-to-back.

I mostly see broadcast used for:
DISCOVER_UNIQUE_BRANCH
UNMUTE
IDENTIFY OFF

The first two are only for discovery, and since proxies handle discovery on behalf of their proxied devices these can be handled immediately in the proxy and don't need to be queued.

You may see a Broadcast IDENTIFY OFF sent 3 or 4 times back-to-back by a controller (to mitigate lost or corrupt packets), but you won't see hundreds at once.

Quote (originally posted by eldoMS):
With regards to the 10 to 20 ms back-off as Eric is suggesting in practice for a controller to handle the overrun issues, are there plans on integrating in the standard a number as a suggested spec for this, or somehow a way informing a controller on responder capabilities on this point?
This is really an implementation-specific decision. A wired proxy may need 10 ms, but a wireless proxy could require 100 ms or more. Whatever value got written into the standard, some would say it's too long, and some too short.

Anyone who's designing a controller that makes heavy use of broadcast packets will need to consider the real-world behavior of proxies. Fortunately, other than in test equipment, this kind of behavior is very rare.
August 19th, 2011, #6
sblair
Administrator
Join Date: Feb 2006
Posts: 433

As Eric said, NACK:NR_PROXY_BUFFER_FULL was specifically intended to allow Gateways/Bridges the ability to tell the Controller they have too many pending commands already to handle any more.
__________________
Scott M. Blair
RDM Protocol Forums Admin
August 21st, 2011, #7
eldoMS
Junior Member
Join Date: Aug 2011
Posts: 8

Hi Scott/Eric

Would this be NR_BUFFER_FULL (0x0007), or has a new NR_PROXY_BUFFER_FULL been added after E1.20-2006?

Greetings

Marc
August 21st, 2011, #8
ericthegeek
Task Group Member
Join Date: Aug 2008
Posts: 375

The NACK Reason Code for Proxy Buffer Full was added in the 2010 version of RDM.

The list of changes between E1.20-2006 and E1.20-2010 is available here:
http://tsp.plasa.org/tsp/documents/d...006_Errata.pdf
August 23rd, 2011, #9
eldoMS
Junior Member
Join Date: Aug 2011
Posts: 8

Hi Scott/Eric,

What then is the exact difference between NR_BUFFER_FULL, which states "buffers or queue full", and NR_PROXY_BUFFER_FULL?

Is the difference only that the responder's general queue would still be available (since no NR_BUFFER_FULL was indicated), and only the proxy function of that responder is blocked by NR_PROXY_BUFFER_FULL?

Might a certain implementation then also just use NR_BUFFER_FULL to cover both reasons together?

Greetings

Marc
August 23rd, 2011, #10
ericthegeek
Task Group Member
Join Date: Aug 2008
Posts: 375

NR_BUFFER_FULL is for when the target responder has no buffer space available.

NR_PROXY_BUFFER_FULL is for when the proxy, or some other in-line device between the controller and the target responder, is out of buffer space.

So, in the system consisting of:
Console -----> RDM to RDM Proxy ------> Moving Light

If the console is sending a packet to the moving light, and the moving light can't handle the message, the moving light would send NR_BUFFER_FULL.

If the console is sending a packet to the moving light, and the proxy can't handle the message, the proxy would send NR_PROXY_BUFFER_FULL.
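The distinction boils down to which device ran out of space, which could be captured in a tiny helper like this sketch. NR_BUFFER_FULL is 0x0007 as stated earlier in the thread; the value used here for NR_PROXY_BUFFER_FULL (0x000A) is my reading of E1.20-2010 and should be checked against the published standard:

```c
/* NACK reason codes: 0x0007 per E1.20; 0x000A is an assumed value for
   the code added in E1.20-2010 -- verify against the standard. */
typedef enum {
    NR_BUFFER_FULL       = 0x0007,
    NR_PROXY_BUFFER_FULL = 0x000A
} nack_reason_t;

typedef enum { OVERFLOW_AT_TARGET, OVERFLOW_AT_PROXY } overflow_site_t;

/* Pick the NACK reason based on where the buffer overflow occurred:
   in the target responder itself, or in an in-line proxy. */
nack_reason_t nack_for_overflow(overflow_site_t site) {
    return (site == OVERFLOW_AT_TARGET) ? NR_BUFFER_FULL
                                        : NR_PROXY_BUFFER_FULL;
}
```

In the Console -> Proxy -> Moving Light system above, the moving light would report OVERFLOW_AT_TARGET and the proxy OVERFLOW_AT_PROXY.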