E1.20 RDM (Remote Device Management) Protocol Forums


dj41354 March 21st, 2016 06:29 AM

RDM Hub Implementation
 
I've started looking at implementing a 1-channel-in, 4-channel-out RDM hub. I'm going to try to keep it as simple as possible, i.e. the hub itself doesn't need to be discoverable or able to be set up remotely. I just need to be able to pass the data (in both directions) correctly.

I'm looking for any advice on how to architect this. Is the simplest approach to:

1) Monitor the packets coming in on the master port (the ones being received by the hub-connected devices);
2) When a valid RDM packet is detected and finishes, wait to see if one of the hub-connected devices activates its transmitter to respond, and if so;
3) Turn on the transmitter on the master port (toward the console);
4) Monitor the response of the hub-connected responder as it sends its packet to the console;
5) When the responder's RDM packet finishes, return the buses to the initial state and begin the process over.
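
A rough state-machine sketch of that flow (every helper below is a hypothetical placeholder for whatever the real hardware and firmware provide):

Code:

/* Sketch of the 1-in/4-out hub turnaround flow described above.
 * All helpers are hypothetical stand-ins for the real hardware hooks. */
#include <stdbool.h>

extern bool rdm_request_complete(void);       /* a valid RDM request just finished on the master port */
extern bool any_fixture_port_active(void);    /* a hub-connected device has started driving */
extern bool response_timeout_expired(void);   /* gave up waiting for a response */
extern bool response_complete(void);          /* the responder's packet has finished */
extern void enable_master_transmitter(void);  /* drive back toward the console */
extern void disable_master_transmitter(void); /* return the master port to listening */

enum hub_state { FORWARD_REQUEST, WAIT_FOR_RESPONSE, FORWARD_RESPONSE };
static enum hub_state state = FORWARD_REQUEST;

void hub_poll(void)
{
    switch (state) {
    case FORWARD_REQUEST:              /* steps 1-2: watch the request go by */
        if (rdm_request_complete())
            state = WAIT_FOR_RESPONSE;
        break;
    case WAIT_FOR_RESPONSE:            /* steps 2-3: wait for a fixture to answer */
        if (any_fixture_port_active()) {
            enable_master_transmitter();
            state = FORWARD_RESPONSE;
        } else if (response_timeout_expired()) {
            state = FORWARD_REQUEST;
        }
        break;
    case FORWARD_RESPONSE:             /* steps 4-5: relay the response, then go back to idle */
        if (response_complete()) {
            disable_master_transmitter();
            state = FORWARD_REQUEST;
        }
        break;
    }
}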

Thanks in advance for any help with this..
Doug.

ericthegeek March 21st, 2016 10:41 AM

There's no "right" way to do it. There are successful splitters using all kinds of different architectures: Pure hardware, Pure software, Mixed hardware+software.

Your major constraints are:
Section 4.2.2: Data Delay of 88us or less
Section 4.2.4: Break Shortening of 22us or less
Section 7.5: DUB Preamble shortening
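
For reference, those limits as constants (the macro names are placeholders; the values and section numbers are the ones cited above):

Code:

/* In-line device limits from ANSI E1.20 as cited above.
 * Macro names are placeholders; only the values come from the standard. */
#define MAX_DATA_DELAY_US        88u   /* Section 4.2.2: max data delay through the device */
#define MAX_BREAK_SHORTENING_US  22u   /* Section 4.2.4: max shortening of the break */
/* Section 7.5 additionally limits how much of the discovery (DUB)
 * response preamble an in-line device is allowed to consume. */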

Getting a splitter mostly right is easy. The challenge is in the error handling, especially in the presence of noise and/or line drivers that create an edge on the 485 wire when they are enabled or disabled.

The other thing to be aware of is that splitters often expose problems in other RDM equipment. Splitters have to be relatively strict about timing. But many controllers and responders are forgiving about timing. That means that you can take a controller and responder that work fine together, put a splitter inline between them, and they stop working. It looks like the splitter is causing the problem, but it's really a timing problem in one of the other devices. Unfortunately, it's usually the splitter manufacturer that gets the nasty phone call when this happens. Good error indicators and logging can be invaluable in troubleshooting problems like this.

Finally, make sure you consider the case where you have multiple responders on the 485 bus connected to the "input" of your splitter. A common mistake is to AND all of the downstream ports together during the discovery response period, and drive that AND Gate's output back to the controller. But this means you're always driving the input bus, and you'll collide with any other responders on the input segment that are also trying to respond.
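
A sketch of one way to avoid that mistake: only enable the upstream driver while a downstream port is actually active during the discovery response window (the port I/O helpers are hypothetical, and the wired-AND of only the active ports is a simplification of what a real collision looks like):

Code:

#include <stdbool.h>

#define NUM_DOWNSTREAM_PORTS 4

extern bool downstream_port_is_driven(int port);   /* hypothetical carrier/activity detect */
extern bool downstream_port_level(int port);       /* current line level on that port */
extern void upstream_driver_enable(bool on);       /* hypothetical line-driver control */
extern void upstream_drive_level(bool level);

/* Call repeatedly during the discovery (DUB) response window. */
void discovery_response_poll(void)
{
    bool any_active = false;
    bool level = true;                 /* idle/marking state */

    for (int p = 0; p < NUM_DOWNSTREAM_PORTS; p++) {
        if (downstream_port_is_driven(p)) {
            any_active = true;
            level = level && downstream_port_level(p);  /* combine only the active ports */
        }
    }

    /* Drive the input (upstream) bus only while something downstream is
     * really responding; otherwise leave it undriven so other responders
     * on the input segment can still be heard. */
    upstream_driver_enable(any_active);
    if (any_active)
        upstream_drive_level(level);
}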

dj41354 April 14th, 2016 07:19 AM

Thanks Eric..
I don't quite understand what you're saying about the 4 downstream ports during discovery. Why would the bus always be driven? Wouldn't the bus only be driven if the downstream responders are actually responding?

dj41354 August 11th, 2016 05:50 AM

I've gotten to the point where I was able to fire up my "proof of concept" hub (1 port in, 1 port out for now). It "mostly" works, but it definitely introduces failures on the DMXter4 + Integrity testing rig (around 100 failures out of the full 4500 tests). How do I judge how much effect (i.e. failures with the hub inline) is acceptable? I get that in a perfect world there wouldn't be any, but if I can't get there, how do you form an opinion about whether it's good enough? Thanks again. Doug.

ericthegeek August 11th, 2016 09:58 AM

It would depend on which tests are failing. For example, some of the tests are looking at corner cases that will never happen in the real world and are mostly intended to test error handling in the responder. Other tests are critical and will have an impact on system operation if they can't pass. If you're comfortable posting a summary of what failed, we may be able to offer a more detailed response.

The automated test suites are great for testing protocol-level issues, that is, the bytes on the wire. In my experience though, most of the problems with splitters (and other inline devices) have to do with timing and error handling, not protocol. For example, how does the splitter react when something responds too quickly or too slowly? How does it handle noise on the line, or a response that happens when it's not expecting one? What do you do with a break that's interrupted by a 500ns-wide pulse in the middle, or a responder that responds to broadcasts? When do you give up on a partial response: do you enforce the inter-slot time, the total packet time, or both? Are the line drivers on every port enabled (and disabled) at the proper times?
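
For the "when do you give up on a partial response" question, one option is to track both limits; a sketch (the timeout values below are placeholders, not the figures from the E1.20 timing tables):

Code:

#include <stdbool.h>
#include <stdint.h>

/* Placeholder limits: substitute the real values from the E1.20 timing tables. */
#define INTER_SLOT_TIMEOUT_US    2100u    /* max gap between response slots */
#define TOTAL_PACKET_TIMEOUT_US 60000u    /* cap on the whole response */

extern uint32_t micros(void);             /* hypothetical free-running microsecond counter */

static uint32_t packet_start_us;
static uint32_t last_byte_us;

void response_started(void)   { packet_start_us = last_byte_us = micros(); }
void response_byte_seen(void) { last_byte_us = micros(); }

/* True when the splitter should abandon the partial response and
 * return all ports to the idle/listening state. */
bool response_abandoned(void)
{
    uint32_t now = micros();
    if ((now - last_byte_us)    > INTER_SLOT_TIMEOUT_US)   return true;
    if ((now - packet_start_us) > TOTAL_PACKET_TIMEOUT_US) return true;
    return false;
}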

Testing for these issues usually requires a misbehaving responder and a lot of time with an oscilloscope.

dj41354 August 15th, 2016 01:48 PM

The method I'm playing with involves monitoring the buses on both sides of the splitter with carrier-detect circuitry, so I can see if the console is driving or waiting for an RDM response. The same thing happens on the fixture bus, so I can see if the fixture is driving the fixture bus, i.e. responding. The RO and DI pins of the RS-485 chips are crossed over, the receiver enables are permanently enabled, and the driver enables are controlled to provide the correct direction of data flow. I made a single-page flowchart of the logic that I was going to post (I have a PDF and a PNG) but I don't see how to post it...
Does this sound like I'm out of my mind, or do you think there's a chance?
The micro I'm using is a 64 MHz PIC, so the logic to run the driver enables would be blisteringly fast. The carrier-detect circuit I'm using is fast as well, easily better than half a bit width. It seems like this should be workable(?)

ericthegeek August 17th, 2016 11:08 AM

You should be able to attach PDFs to a post using the "Go Advanced" button under the "Quick Reply" window. You can also attach small images, but the resolution limit is usually too small to be useful. Just remember that everything you post is visible to the whole world.

When you say carrier detect, do you mean that you're detecting whether the 485 line is driven vs. floating, and then making the direction decision based on that? That makes me a little nervous: you're basically relying on the analog characteristics of the signal. Can you reliably distinguish a weakly driven line (long cable, heavy loading, low-power line driver, etc.) from a strongly biased line? Make sure you test with a variety of both 5V and 3.3V line drivers, and with different termination schemes.

I'm mostly a digital person; Bob has done some analog work on the 485 signal and may have better advice if he reads this. It's the real-world performance that matters. Try it, it might work!

dj41354 January 5th, 2017 05:06 PM

Hey Eric..
Been a while since the last post..
I'm going to the Plugfest this Jan..
I'm bringing a proof of concept PCB for my RDM hub along with a bunch of other gear..
Should be a good weekend!
Thanks again for your support with all this!
Doug Johnson / BlackTank

dj41354 May 8th, 2017 04:08 PM

Hi Eric..
I've finally got a next-version PCB implementation of an RDM hub. At first blush it seems to work with our Pica cubes (I've just started debugging the hub), but when I use another fixture I'm getting an error on the DMXter (which is also causing my Integrity to hang). The error on the DMXter screen is "0002: DISC MUTE FRAMING ERR SB 1" (the fixture does not cause this error when connected directly to the DMXter).
Can you tell from this error the nature of the problem my hub is causing?
Thanks..
Doug.

ericthegeek May 8th, 2017 07:55 PM

There was a framing error during the response to a "Mute" request. SB 1 means that the first stop bit (which should be high) was low.

Perhaps something is driving the line when it shouldn't, or isn't driving the line when it should. I've also seen this happen when an inline device enables its line driver part-way through a byte, thereby clipping off the start bit.
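
Since "SB 1" points at a specific bit position, it helps to know where that bit sits in time. A small worked example at DMX/RDM's 250 kbit/s (8N2 framing, 4us per bit):

Code:

#include <stdio.h>

#define BIT_US 4u   /* 250 kbit/s => 4 microseconds per bit */

int main(void)
{
    /* Frame: start bit (0-4us), data bits 0..7 (4-36us), two stop bits (36-44us). */
    unsigned stop1_start = (1u + 8u) * BIT_US;    /* 36us after the start-bit falling edge */
    unsigned stop1_end   = stop1_start + BIT_US;  /* 40us */
    printf("first stop bit: %u-%u us after the start-bit edge\n", stop1_start, stop1_end);
    return 0;
}

That 36-40us window after the start-bit edge is where to look on the scope when chasing this particular error.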

dj41354 May 9th, 2017 08:07 AM

So far, to keep things simple, all my testing is being done with only one fixture attached to the hub. It turns out that the fixture causing the trouble is responding sooner than the 176usec minimum; it's in the range of 36usec to 60usec. This causes my hub a problem because I'm waiting for a full 130usec of silence from the controller before turning off the output-port drivers and going back to checking ports for responses, so I'm not turning around fast enough to catch the beginning of the response from the fixture.

I guess my question is about the mechanism for determining when it's OK to stop driving the output ports (i.e. stop sending the controller's data to the output ports) and go back to the state where all ports (output ports and controller port) are in the "input" (listening) state to see who's going to talk next. I did see in the spec where it says (for controller packet timing) "The average inter-slot time shall not exceed 76usec", so it seems the hub should wait (at a minimum) for a full 76usec of silence before deciding the controller is done transmitting. Is waiting for a full 130usec of silence a good way to do this?

Thanks as always for your help..
Doug.

ericthegeek May 9th, 2017 08:58 AM

As you've found, a properly behaved splitter will often expose problems caused by misbehaving responders. But to the end-user it looks like the splitter is broken because "it works fine when the splitter's not installed".

This is what makes building a splitter such a challenge: Misbehaving devices, noisy 485 lines, and line driver enable/disable transients can leave you in a corner where there's no "correct" behavior.

In my intelligent splitter, I monitor for misbehaving devices and disable RDM on the port if something is talking when it shouldn't. At the end of a unicast request or response, there should be *no* activity on the line for 176us. If I see anything happening on a downstream port during that period I disable responses from that port for a few seconds and report a "jabber" event to the controller. That way the user sees "Jabbering responder found on port 4", rather than unexplained flakiness.
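
A rough sketch of that lockout logic (the hardware hooks and the lockout length are placeholders):

Code:

#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS         4
#define JABBER_LOCKOUT_MS 5000u                 /* "a few seconds": placeholder value */

extern bool     port_activity_seen(int port);   /* hypothetical edge/carrier detect */
extern uint32_t millis(void);                   /* hypothetical millisecond counter */
extern void     report_jabber_event(int port);  /* e.g. queue a message for the controller */

static uint32_t lockout_until_ms[NUM_PORTS];

/* Call while the line is supposed to be quiet (the 176us after a
 * unicast request or response). */
void quiet_window_poll(void)
{
    for (int p = 0; p < NUM_PORTS; p++) {
        if (port_activity_seen(p)) {
            lockout_until_ms[p] = millis() + JABBER_LOCKOUT_MS;
            report_jabber_event(p);
        }
    }
}

/* The forwarding code can then ask whether RDM is currently allowed on a port. */
bool port_rdm_enabled(int p)
{
    return (int32_t)(millis() - lockout_until_ms[p]) >= 0;
}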

renatoml01 August 29th, 2019 09:33 AM

I need to develop a splitter with support for RDM, and in this post I saw something interesting.

ericthegeek, you said "architectures: Pure hardware". Is that really possible? How is the direction of the messages handled?

My intention is to make a DMX/RDM splitter with 1 input and 2 outputs using the minimum possible hardware. Is it possible to perform this task without a microcontroller?

NOTE: sorry for my English, I'm from Brazil.

Thank you
Renato

ericthegeek August 29th, 2019 10:32 AM

Your English is better than my (non-existent) Portuguese. What you wrote was perfectly understandable, so you don't need to apologize.



Yes, it's possible to build a splitter without a microcontroller. You'll need a lot of one-shots and a state machine to handle the edge detection.


I strongly recommend including glitch filtering to improve noise immunity. The shortest valid high or low in a properly functioning RDM line is 4 microseconds. Thus, any pulse on the line that's shorter than 1 to 2 microseconds is noise and should be ignored. Doing this requires more one-shots and a delay line.
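
In software the same filter can be done with a timer instead of one-shots; a minimal sketch, assuming a free-running microsecond counter (note it adds up to the filter time as latency on every edge):

Code:

#include <stdbool.h>
#include <stdint.h>

#define GLITCH_US 2u            /* pulses shorter than ~1-2us are treated as noise */

extern uint32_t micros(void);   /* hypothetical microsecond counter */

static bool     filtered_level = true;   /* idle/mark */
static bool     raw_level      = true;
static uint32_t last_edge_us;

/* Feed this with the raw line level every time an edge is detected. */
void line_edge(bool new_level)
{
    raw_level    = new_level;
    last_edge_us = micros();
}

/* The filtered level only follows the raw level once the line has been
 * stable for GLITCH_US, so a 1us noise spike never gets through. */
bool line_level_filtered(void)
{
    if (raw_level != filtered_level &&
        (micros() - last_edge_us) >= GLITCH_US)
        filtered_level = raw_level;
    return filtered_level;
}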


Given the complexity of the state machine, it's probably easier to do it in a PLD or small microcontroller, but it can be done without these if needed.

renatoml01 August 29th, 2019 10:55 AM

Thanks for the attention ericthegeek,

No doubt it will be easier with a microcontroller, and I have no problem using them; I have some products that use PICs and could take advantage of them on the production line.

So, starting with a microcontroller-based splitter, I have the following question:

Do I use the microcontroller to "buffer" the message and pass it on?
Or do I use the microcontroller just to detect the start of a communication request and direct the communication from controller to responder and vice versa?

Trying to explain better: thinking about the 485 transceiver arrangement I use in my current DMX splitter, do I leave it as it is and add the microcontroller just to control the communication direction, or do I read the incoming message with the microcontroller and have it pass the message forward?

ericthegeek August 30th, 2019 08:58 AM

Both architectures are valid. You can decide which you prefer.

Quote:

Originally Posted by renatoml01 (Post 3299)
Do I use the microcontroller to "buffer" the message and pass it on?

Yes, this can work. The microcontroller has to watch all of the ports and determine which is active, then receive bytes and breaks and retransmit them on the other ports.

However, you can't buffer the entire RDM packet. The maximum delay that's allowed for a transparent inline device is 88 microseconds. This means you need to buffer a byte-at-a-time, not an entire packet.
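
A minimal sketch of that byte-at-a-time forwarding, with hypothetical UART helpers standing in for the real peripheral driver:

Code:

#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS 3   /* 0 = upstream, 1..2 = downstream in a 1-in/2-out splitter */

extern bool    uart_rx_break(int port);          /* hypothetical: break detected?    */
extern bool    uart_rx_ready(int port);          /* hypothetical: byte waiting?      */
extern uint8_t uart_rx_read(int port);           /* hypothetical: read received byte */
extern void    uart_tx_break(int port);
extern void    uart_tx_byte(int port, uint8_t b);

/* Relay breaks and bytes from the currently active port to all the
 * others as soon as they arrive, so the delay stays well under 88us. */
void forward_poll(int active_port)
{
    if (uart_rx_break(active_port)) {
        for (int p = 0; p < NUM_PORTS; p++)
            if (p != active_port)
                uart_tx_break(p);
    }
    if (uart_rx_ready(active_port)) {
        uint8_t b = uart_rx_read(active_port);
        for (int p = 0; p < NUM_PORTS; p++)
            if (p != active_port)
                uart_tx_byte(p, b);
    }
}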

Quote:

Originally Posted by renatoml01 (Post 3299)
Or do I use the microcontroller just to detect the start of a communication request and direct the communication from controller to responder and vice versa?

Yes, this is possible too.

When you build a splitter, you also need to determine whether you want it to be protocol aware. It can receive and parse the packets and try to understand the protocol to make decisions. Or it can just look at the falling and rising edges. It's entirely up to you.

renatoml01 December 3rd, 2019 03:12 PM

Hi ericthegeek,

I've come back to my project and have done several tests to better understand RDM so I can implement it in my DMX/RDM inline device. From what I understand so far, I need to have my PIC read the conversation, and when I detect the controller's start code (CC 01) I direct the communication toward the slaves. My question is: when do I return the communication to the other direction? Do I drive for a while and then come back, or wait for the response start code (CC 01) from the slaves?

Thanks for the help; I need to finish this buffer as soon as possible. Thank you very much!

ericthegeek December 4th, 2019 08:22 AM

The maximum allowed delay for a transparent inline device is 88us, so you can't wait for the CC 01. By the time you receive the CC 01 there's already been too much delay. You typically have to watch for the falling edge of the response's break and set your direction based on that.
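
A rough sketch of that edge-based turnaround (the edge-capture and routing helpers are hypothetical):

Code:

#include <stdbool.h>

#define NUM_DOWNSTREAM_PORTS 2

extern bool falling_edge_seen(int port);        /* hypothetical edge-capture flag */
extern void route_port_to_controller(int port); /* switch direction toward the controller */
extern bool response_window_open(void);         /* still inside the response timeout? */

/* Call after forwarding a request that expects a response: switch
 * direction on the first falling edge (the start of the response's
 * break), well inside the 88us delay budget, instead of waiting to
 * decode any bytes. */
void wait_for_response_poll(void)
{
    if (!response_window_open())
        return;

    for (int p = 0; p < NUM_DOWNSTREAM_PORTS; p++) {
        if (falling_edge_seen(p)) {
            route_port_to_controller(p);
            break;
        }
    }
}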

