E1.20 RDM (Remote Device Management) Protocol Forums

E1.20 RDM (Remote Device Management) Protocol Forums (http://www.rdmprotocol.org/forums/index.php)
-   RDM Timing Discussion (http://www.rdmprotocol.org/forums/forumdisplay.php?f=7)
-   -   3.2.1 Responder Packet Timings (http://www.rdmprotocol.org/forums/showthread.php?t=37)

prwatE120 August 7th, 2006 08:06 PM

3.2.1 Responder Packet Timings
 
The last paragraph of section 3.2.1 on page 10 reads

"Responders may consider controller packets with interslot delays execeeding the maximum in line 3 of table 3-2 to be lost"

However, Table 3-2 is a table of packet spacing timings, and does not detail min/max inter-slot times.

Should not this paragraph refer to Line 1 of Table 3-1, or better still, Line 1 of Table 3-3, Responder Packet Timings?

sblair August 7th, 2006 11:19 PM

*** Warning....this discussion is about to enter Task Group mode ***
Some of the following discussion and details are from earlier drafts. Just because I'm referencing them for discussion does not mean they are what is in the FINAL approved standard. In all cases, it is the FINAL standard that is considered law.
************************************************

Just had to get that disclaimer out of the way.

Peter, what you've brought up appears to be a valid error. It certainly isn't making sense to me. However, after a stroll down memory lane, it has been referencing the same table ever since it was changed to a reference rather than a hard number in the text.

I had to dig back in draft versions to an early draft where it was actual hard numbers in the text.

From draft v1.2c on May 11, 2004:
Quote:

No single Inter-slot delay shall exceed 2.0mS.

Responders may consider controller packets with inter-slot delays of 2.1mS or greater to be lost.

It was after that draft that it was re-worked to point to the table and line number instead. Since that point it has always pointed to the "Controller Packet Spacings" table.

My guess at this point would be that it should have pointed to Table 3-3 Line 1.

Draft v1.3 had the following:
Quote:

Responders may consider controller packets with inter-slot delays exceeding the maximum in line 1 of Table 3-2 to be lost.

Draft v2.1, dated Oct 25, 2004, changed it to the current "line 3".

We must have spotted the issue while editing the v2.1 draft at LDI, but changed the line # rather than the table #. It then passed through two subsequent Public Reviews with no comments.

I would like to get confirmation from more Task Group members here that the current text is an error, and then I'll forward it to Karl to find out if/how to issue an Errata.

-dalc- May 8th, 2009 09:30 AM

inter-slot timeout
 
"Responders may consider controller packets with inter-slot delays of 2.1mS or greater to be lost"...

So, every USB/DMX PC interface which outputs packets directly (without an external buffering & retransmission circuit) may cause packets to be lost, due to the time scheduling of the operating system (for Windows, typically >10ms).

In my RDM responder firmware I set the inter-slot delay timeout equal to the packet timeout, i.e. 1s. Is this a valid solution?

ericthegeek May 12th, 2009 11:07 AM

Technically it's a valid solution. The key word is "may". A responder can wait as long as it wishes for the inter-slot timeout. Realistically though, no controller would generate packets like that because it would fail with many responders.

RDM has pretty tight timing restrictions. A device (controller or responder) which has scheduling delays in the 10ms range is unlikely to work reliably with RDM. You really need sub-millisecond level control to build an RDM device.

Typically, that means the USB UARTs from companies like FTDI and SiLabs won't work for RDM on their own. You need a microcontroller or PLD to generate more accurate timing.
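To illustrate the "may" wording discussed above: a responder is free to choose how long it waits between slots before declaring a packet lost. A minimal sketch of that check in C (all names here are hypothetical, not from any reference firmware; the 2.1ms figure is the value quoted earlier in the thread, and a lenient 1s timeout like dalc's is equally permissible):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Per-port receive state for the inter-slot timeout check. */
typedef struct {
    uint32_t last_byte_us;  /* timestamp of the previous slot */
    bool     receiving;     /* true while a packet is in flight */
} rx_state_t;

/* Call on every received byte with a free-running microsecond
 * timestamp. Returns false when the gap since the previous slot
 * exceeded timeout_us, meaning the partial packet should be
 * considered lost and discarded. */
static bool rx_byte(rx_state_t *s, uint32_t now_us, uint32_t timeout_us)
{
    if (s->receiving && (now_us - s->last_byte_us) > timeout_us) {
        s->receiving = false;   /* drop the partial packet */
        s->last_byte_us = now_us;
        return false;
    }
    s->receiving = true;
    s->last_byte_us = now_us;
    return true;
}
```

With a strict 2100us timeout, a 10ms OS scheduling gap trips the check; with a 1s timeout it does not, which is why a lenient responder tolerates PC-based controllers that a strict one rejects.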

dangeross May 22nd, 2009 03:33 AM

With the exception of the break and MAB timing, the FTDI and SiLabs chips have large enough FIFOs that for most RDM packets the inter-slot time would be zero, as long as your USB driver can push the data down the USB pipe in large enough chunks.

ericthegeek May 22nd, 2009 12:06 PM

The problem with USB UARTs isn't with the inter-slot timing. The problem is meeting the line turnaround time requirements. At the end of a packet, you typically have 176 microseconds to turn the line around and be ready for whatever comes next (Table 3-2 lines 3 and 4).

Most operating systems don't give you very good control over USB transaction scheduling for bulk transactions. The OS's USB stack may send the turnaround right after the data, or it may delay it an arbitrary number of USB frames. You simply don't know, and the timing gets worse under heavy USB or CPU load.

The other problem I had was break generation. The "Start Break" and "End Break" commands were sent in two different USB transactions which could have an arbitrary amount of time between them. With a few tricks, I was able to guarantee the break was never less than 176us, but it would regularly exceed the 352us maximum by several ms (Table 3-1 line 1).
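The break limits quoted above (176us minimum, 352us maximum, per Table 3-1 line 1) are easy to express as a range check; a host-side driver could measure the interval between its "Start Break" and "End Break" commands and flag out-of-spec breaks like the multi-millisecond ones described. A hypothetical sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Controller break limits as quoted in this thread
 * (Table 3-1 line 1). */
#define BREAK_MIN_US 176u
#define BREAK_MAX_US 352u

/* Returns true when a measured break duration falls within
 * the allowed window. */
static bool break_in_spec(uint32_t duration_us)
{
    return duration_us >= BREAK_MIN_US && duration_us <= BREAK_MAX_US;
}
```

The hard part, of course, is not the comparison but getting a trustworthy measurement and retrying, which is exactly what arbitrary USB transaction scheduling prevents.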

This issue came up during one of the RDM Public Review periods. As I recall, the conclusion was that accommodating the timing needs of the USB UARTs available at the time would have required such long turnaround and timeout periods that Null Start Code performance would be significantly impaired.

Disclaimer: I'm speaking solely from my experience and not making any grand "It can't be done" proclamations. It's been several years since I tried using a USB UART for RDM. It's fully possible that I drew the wrong conclusions or that the problems I encountered have been resolved in newer chips or newer software.

Nigel Worsley May 23rd, 2009 09:32 AM

Quote:

Originally Posted by ericthegeek (Post 1576)
Disclaimer: I'm speaking solely from my experience and not making any grand "It can't be done" proclamations. It's been several years since I tried using a USB UART for RDM. It's fully possible that I drew the wrong conclusions or that the problems I encountered have been resolved in newer chips or newer software.

Having used several different manufacturers' offerings (but not for DMX), I am pretty sure that you are still correct. These all needed custom drivers, which caused no end of problems, so I now use a processor with an integrated USB interface and rely on the standard device class drivers built into Windows (and probably Linux and whatever the Mac OS currently calls itself as well).

If there is any interest, I could easily knock up a USB-to-serial chip that is DMX and RDM aware to get around these problems while still being compatible with software that thinks it is driving a COM port. I might even be persuaded to make it open source :cool:

