Old May 22nd, 2009   #6
ericthegeek
Task Group Member
 

The problem with USB UARTs isn't the inter-slot timing; it's meeting the line turnaround time requirements. At the end of a packet, you typically have 176 microseconds to turn the line around and be ready for whatever comes next (Table 3-2 lines 3 and 4).

Most operating systems don't give you much control over USB transaction scheduling for bulk transfers. The OS's USB stack may send the turnaround response right after the received data, or it may delay it by an arbitrary number of USB frames. You simply don't know, and the timing gets worse under heavy USB or CPU load.
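To make that concrete, here's a rough user-space sketch of a turnaround on Linux with termios. The device path is a placeholder, and the baud rate is a stand-in (250 kbit/s isn't a standard termios rate, so real DMX hardware needs a custom divisor); the point is just that the write's completion time is whatever the USB stack decides it is:

[code]
/* turnaround.c: rough timing sketch, not production code.
 * Assumes Linux/termios and a hypothetical adapter at /dev/ttyUSB0. */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetspeed(&tio, B230400);  /* stand-in; DMX512/RDM runs at 250 kbit/s */
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char buf[64];
    if (read(fd, buf, sizeof(buf)) <= 0) return 1;  /* wait for inbound data */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    unsigned char resp = 0xCC;        /* RDM start code, just as a marker */
    write(fd, &resp, 1);
    tcdrain(fd);                      /* block until the driver says it's sent */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("turnaround took %.0f us (budget: 176 us)\n", us);
    close(fd);
    return 0;
}
[/code]

On a full-speed USB UART that write gets quantized to 1 ms USB frames at best, so the measured time routinely lands well past the 176 us budget.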

The other problem I had was break generation. The "Start Break" and "End Break" commands were sent in two different USB transactions, which could have an arbitrary amount of time between them. With a few tricks I was able to guarantee the break was never shorter than 176us, but it would regularly exceed the 352us maximum by several milliseconds (Table 3-1 line 1).
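For what it's worth, here's roughly what that two-transaction break looks like from user space on Linux (hypothetical device path again; TIOCSBRK and TIOCCBRK each typically become a separate USB request to the adapter). Even with no delay at all between the two calls, the break length is at the mercy of the host's USB scheduling:

[code]
/* breaktest.c: sketch of the start/end break problem, not production code. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ioctl(fd, TIOCSBRK);  /* "Start Break": first USB request */
    ioctl(fd, TIOCCBRK);  /* "End Break": second, separate USB request */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* This measures when the ioctls return, not the wire timing, but it
     * shows the floor: the break length is set by USB scheduling, not by you. */
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("break lasted ~%.0f us (responder spec: 176-352 us)\n", us);
    close(fd);
    return 0;
}
[/code]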

This issue came up during one of the RDM Public Review periods. As I recall, the conclusion was that accommodating the timing needs of the USB UARTs available at the time would have required such long turnaround and timeout periods that Null Start Code performance would have been significantly impaired.

Disclaimer: I'm speaking solely from my own experience and not making any grand "it can't be done" proclamations. It's been several years since I tried using a USB UART for RDM, so it's entirely possible that I drew the wrong conclusions, or that the problems I encountered have been resolved in newer chips or newer software.
