12

I am working on a wireless communication system. We are using around 10 transmitter/receiver pairs, with an ATmega16 microcontroller doing the encoding and decoding through its USART port.

Now we are able to transmit data and receive it at the receiver end, but there is a major problem: when two transmitters send at the same time, the receiver cannot recover either message because of the interference.

Suppose one transmitter sends "SENDA" while at the same moment another transmitter sends "GETTS"; the receiver then cannot receive proper data. Since all the transmitters and receivers operate on the same frequency, this interference occurs. How can I resolve the issue?

Nick Alexeev
user934070

6 Answers

14

Developing a workable RF communications protocol is apt to be a tricky but educational exercise. A few additional points to consider beyond what's been said:

  1. On some radio hardware, it takes a lot of power to listen for a signal. With many if not most small radios, listening for a second will take more energy than transmitting for a millisecond; on some radios, listening for a millisecond may take more energy than transmitting for a millisecond. If current consumption is not an issue, listening continuously is a lot simpler than listening intermittently; if current consumption is an issue, however, it may be necessary to listen intermittently. Intermittent listening is probably not a good idea, though, until you've managed to get something going with a continuous-listen protocol.
  2. Listen-before-transmit may be "polite", but it's nowhere near as useful with RF as with e.g. an Ethernet cable. Ethernet signalling is designed so that not only is it likely that a device which listens before transmitting will usually avoid a collision, but it's also designed so that a device whose transmission collides with that of another device is virtually guaranteed to notice. RF transmission offers no such promise. It's entirely possible that when P wants to transmit to Q, some other device X which is closer to Q than to P will be transmitting loud enough to prevent Q from hearing P's transmission, but not loud enough for P to notice. The only way P will know that Q might not have received his transmission is by the fact that P won't hear a response from Q.
  3. It's important to beware of the consensus problem--much more so with RF than with wire signalling. If P sends to Q, it's possible that Q will hear P's transmission and send an acknowledgment, but P will for various reasons not hear that acknowledgment. It's thus necessary to be very careful to distinguish retransmissions from "new" transmissions.

    The consensus problem can be especially vexing if one is trying to save energy by powering down receivers when they're not needed. Suppose two nodes P and Q are supposed to communicate once every 10 seconds, so they power up and P sends Q a packet. Q receives the packet, sends his acknowledgment, and--knowing that P won't be sending anything for almost ten seconds--powers down. If P doesn't get Q's acknowledgment, he'll retransmit; since Q is asleep, however, he won't hear P's retransmission. From Q's perspective, that wouldn't matter (he's already received his data), but it means no matter how many times P retries, he'll have no way of knowing Q got his packet (at least not until the next rendezvous in about ten seconds).

  4. It's entirely possible to have a situation in which node Q can receive transmissions from P, but P cannot receive transmissions from Q. It may not be possible to communicate usefully in such a scenario, but one should at least endeavor to avoid doing anything obnoxious (like having P endlessly retry a transmission hundreds of times per second).

As said, a workable RF communications protocol is apt to be a tricky exercise. Still, I'd expect you'll probably learn a lot from the experience.
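One way to handle the retransmission-vs-new-transmission distinction from point 3 is a per-sender sequence number that the sender only increments for new messages, so the receiver can re-acknowledge a duplicate without reprocessing it. The packet layout and node count below are made up for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packet header: sender ID plus a sequence number that the
 * sender increments for each NEW message (retransmissions reuse it). */
struct packet {
    uint8_t sender;
    uint8_t seq;
};

#define MAX_NODES 10

/* Last sequence number accepted from each sender (0xFF = none yet). */
static uint8_t last_seq[MAX_NODES];

void rx_init(void)
{
    memset(last_seq, 0xFF, sizeof last_seq);
}

/* Returns 1 if the packet carries new data, 0 if it is a duplicate
 * retransmission whose earlier copy (and our ack) already got through.
 * Either way the receiver should send an acknowledgment. */
int rx_accept(const struct packet *p)
{
    if (p->sender >= MAX_NODES)
        return 0;                 /* unknown sender: drop */
    if (p->seq == last_seq[p->sender])
        return 0;                 /* duplicate: re-ack but don't re-process */
    last_seq[p->sender] = p->seq;
    return 1;
}
```

A single byte wraps around, which is fine as long as a sender never has more than one message outstanding at a time.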

supercat
8

If you're not using a standard protocol for this, then you're going to have to design and implement one. A simple example:

  • before transmitting, a node should listen to check that the channel is free
  • if, after transmitting a message, no acknowledgement is received, the node should wait a random period of time and then try again, up to some maximum number of retries

So what happens is that you first try to avoid "jamming" by listening before transmitting; then, if a collision still occurs, you detect it via the lack of acknowledgment from the receiving node and try again after a random delay - the two colliding transmitters will use different random delays, minimising the chance of a second collision.
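A common way to pick the random delay is binary exponential back-off, where the range doubles after each failed attempt. This is a sketch, not a specific radio's API; the hardware calls in the comment (carrier detect, send, ack wait, delay) are placeholders for whatever the real driver provides:

```c
#include <stdlib.h>

#define MAX_RETRIES 5

/* Random back-off in "slots" after the k-th failed attempt: the range
 * [0, 2^k) doubles each time, so repeat collisions get rarer. */
int backoff_slots(int attempt)
{
    return rand() % (1 << attempt);
}

/* --- sketch of the transmit loop (hardware calls are placeholders) ---
 *
 * for (int a = 1; a <= MAX_RETRIES; a++) {
 *     while (radio_carrier_detect())      // listen before talk
 *         ;
 *     radio_send(frame);
 *     if (wait_for_ack(ACK_TIMEOUT_MS))
 *         return OK;                      // peer acknowledged
 *     delay_slots(backoff_slots(a));      // random delay, then retry
 * }
 * return GAVE_UP;
 */
```

On an ATmega16 the slot delay would come from a hardware timer, and `rand()` could be seeded from something node-specific (e.g. the device's ID) so two nodes don't draw identical sequences.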

Paul R
    A major limitation of collision avoidance is that there's no guarantee that prospective transmitters will be within receiving range of each other, even if they're both within receiving range of their intended target. – supercat Sep 09 '11 at 00:20
    Collision avoidance just provides some improvement in channel utilization. You still have to do acknowledgements and retransmissions. The key is to wait a random time before retransmitting. – David Schwartz Sep 11 '11 at 02:21
  • The most important thing is that this works in real time, and also it is one-way communication, so if we make it two-way it will create more interference. :( – user934070 Sep 13 '11 at 06:45
  • OK - it's never going to be robust or reliable then - you can listen before transmitting, but apart from that you will never have any guarantee that a transmission has actually been received. – Paul R Sep 13 '11 at 06:49
4

Here are two common options:

1) Implement a Listen Before Talk (LBT) algorithm, which checks whether a transmission is in progress before starting your own, and if so, backs off for a period of time. The period should contain a fixed length and a random length so that contending nodes don't all back off for the same period. Many standard radio protocols include this procedure; see ETSI EN 300 220-1.

2) Implement a beacon system where the transmissions are timed from the beacon. Each transmitter gets its own timing slot. You would normally use serial numbers in the devices to determine their slot, and have a system for determining who sends the beacon. Since this relies on all the transmitters having a different slot, it is not a good idea to leave it to the user to uniquely identify all the transmitters, unless you have a solid procedure for this.
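Both ideas boil down to a little arithmetic. The sketch below is illustrative only: the slot length, back-off constants, and the slot-from-serial-number rule are made-up examples, not values taken from EN 300 220-1:

```c
#include <stdint.h>
#include <stdlib.h>

#define SLOT_MS       1   /* length of one transmit slot (assumed)  */
#define N_SLOTS      10   /* one slot per transmitter               */
#define LBT_FIXED_MS  5   /* mandatory minimum back-off (assumed)   */
#define LBT_RAND_MS  10   /* extra random spread (assumed)          */

/* Beacon scheme: each device derives its slot from its serial number,
 * so its offset within the frame after the beacon is fixed per device.
 * This only works if no two devices map to the same slot. */
uint16_t slot_offset_ms(uint32_t serial)
{
    return (uint16_t)((serial % N_SLOTS) * SLOT_MS);
}

/* LBT scheme: back-off = fixed part + random part, so two devices that
 * both found the channel busy are unlikely to retry at the same instant. */
uint16_t lbt_backoff_ms(void)
{
    return (uint16_t)(LBT_FIXED_MS + rand() % LBT_RAND_MS);
}
```

With only 10 nodes, `serial % N_SLOTS` collides easily, which is exactly why the answer warns against leaving slot assignment to chance: in practice you would assign each device a unique slot number explicitly.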

Martin
  • As an aside, I think Part two could take advantage of CDMA if they know most stations will normally not need to transmit. – Kortuk Sep 08 '11 at 23:42
    @Kortuk: I was under the impression that one of the advantages of CDMA is that--if the receiver can get synced up with the sender--the number of bit errors will go up as the number of simultaneous transmitters increases, but otherwise there is no "jamming" as such. – supercat Sep 09 '11 at 00:19
  • @supercat, I am under the impression that everyone is randomly allocating time slots. Most transmitters only talk occasionally so the chance of two talking at the same time is very small, but it occasionally happens and shows up as a small number of bit errors at that point. With interlacing and general ECC you can all but ignore this. That said, everyone has predetermined time slots based on a random number generator to ensure no two transmitters share the same space constantly and only occasionally meet. I can ask someone that knows for sure and have them chime in. – Kortuk Sep 09 '11 at 00:24
    @Kortuk: That's what I used to think CDMA meant, but a number of sources, including the Wikipedia page, suggest that it refers to modulation at a speed higher than the bit rate; if the transmitter inverts its signal according to a pseudo-random bitstream, and the receiver does likewise and then filters resulting signal, the original signal can be recovered. Approaches based on pseudo-random time slot are useful, but I don't think CDMA is the right term. The biggest difficulty with such approaches is coordination. I really wish there were a widely-available high-resolution time signal. – supercat Sep 09 '11 at 03:43
    @Kortuk: WWV kinda sorta works for synchronizing digital clocks and watches, but it takes a minute to send out a time signal. It would be so much nicer if there were widely-deployed time broadcasts which could be read in 10ms or less and was guaranteed to be within a certain small tolerance of WWV time in Colorado (meaning that at a location 1,000 miles away the locally-relayed time broadcasts should actually lead WWV by about 5ms). – supercat Sep 09 '11 at 03:48
  • @supercat. There is a time signal, in the UK at least: http://en.wikipedia.org/wiki/Time_from_NPL However, it takes longer than 10ms to read. But it might well be possible to synchronise the questioner's nodes. – Rocketmagnet Oct 25 '11 at 23:45
3

As I understand from the comments etc., power is not an issue, but communication speed is. So here is my suggestion for a protocol.

Number all of the nodes, 0..n-1. Let each node know which number it is. Node 0 will be the master.

Every 15ms, node 0 sends a message : "0HELO".
1ms later, node 1 sends a message : "1DATA".
1ms later, node 2 sends a message : "2NICE".
1ms later, node 3 sends a message : "3 ". (This node has nothing to say)
1ms later, node 4 sends a message : "4CATS".
...
1ms later, node 9 sends a message : "9MICE".
Then there is a pause of 5ms.

The nodes always send their messages in their correct time slots, even if they have nothing to say. This way you are guaranteed a 66 Hz communication rate, with no collisions.
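The slot arithmetic above is simple to sketch in C. The constants mirror the example (10 nodes, 1 ms slots, 5 ms pause, so a 15 ms frame), and `now_ms` is assumed to be a timer reading synchronised to the master's "0HELO":

```c
#include <stdint.h>

#define FRAME_MS 15   /* 10 slots of 1 ms + 5 ms pause => ~66 Hz */
#define SLOT_MS   1
#define N_NODES  10

/* Milliseconds after the start of the current frame at which node `id`
 * must begin its 1 ms transmission (node 0 is the master). */
uint8_t tx_time_in_frame(uint8_t id)
{
    return (uint8_t)((id % N_NODES) * SLOT_MS);
}

/* 1 if `now_ms` (time since the master's last "0HELO") falls inside
 * node `id`'s slot, 0 otherwise (including during the 5 ms pause). */
int in_my_slot(uint8_t id, uint16_t now_ms)
{
    uint16_t t = now_ms % FRAME_MS;
    return t >= tx_time_in_frame(id) && t < tx_time_in_frame(id) + SLOT_MS;
}
```

The hard part in practice is keeping every node's notion of `now_ms` locked to the master's beacon, since independent MCU clocks drift; each node would typically reset its timer on hearing "0HELO".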

Rocketmagnet
2

RF communication with multiple asynchronous transmitters is a tricky problem. Lots of thought and engineering went into the 802.11 and 802.15 standards to get around these issues. If you have to ask here, then you should stick to off-the-shelf hardware that implements one of these standards.

Note that while both are useful and represent a lot of careful design, generally any real application will still have to implement a protocol stack above these standards. This would be WiFi and TCP above 802.11, and Zigbee or Microchip's MiWi or some others above 802.15.

Again, designing a multi-point radio network is way out of your league if you're asking such basic questions here. You'll just spend a lot of time and things will still not always work right.

The choice of 802.11 versus 802.15 depends mostly on your bandwidth and range requirements and available power. 802.15 is smaller, lower power, lower bandwidth, and shorter range. With the right higher-level software, an 802.15 device can run a long time from batteries, whereas that's generally not true for 802.11.

Olin Lathrop
    It all depends on the application. It is indeed quite difficult but at the same time much can be learned from the exercise. And the things he will learn are universal laws and not some implementation specific details. – jpc Sep 08 '11 at 13:13
    "way out of your league" is a bit harsh. They're in over their head somewhat, and I've seen people in this kind of position waste a year on this type of problem... but that does not mean they can't take advice and get it to work. As jpc said, success here could mean a significant leap in understanding. If they were an employee of mine with this question (and I could afford the time for the lesson) I'd nudge them along and hope they learn something. – darron Sep 08 '11 at 14:27
    It's a disservice when people come to this site looking for answers to learn about and solve a problem and leave forced (by upvotes) into a solution they weren't asking for or can't use. – Joel B Sep 08 '11 at 18:14
    @JoelB upvotes do not force the acceptance of an answer. – Chris Stratton Dec 31 '11 at 23:05
1

I agree with listening before talking and the beacon system. But if you want multiple nodes to share a single channel at the same time, you could use the direct-sequence spread spectrum (DSSS) modulation technique. This could help you avoid interference.

But for this you may need to buy a chip that implements it, for example an XBee (based on ZigBee). If you can't change your transmitter, you should stick to the other answers.
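To show the idea behind DSSS, here is a toy spread/despread in C using a single made-up 8-chip code. A real system uses much longer codes, one code per channel, and correlation at the RF/baseband level; this is purely illustrative of why a few chips corrupted by a colliding transmitter can be voted out:

```c
#include <stdint.h>

/* 8-chip pseudo-random spreading code (made up for illustration). */
#define CHIPS 0xB1u   /* binary 1011 0001 */

/* Spread one data bit into 8 chips: the chip code as-is for a 1,
 * inverted for a 0. The radio then transmits chips, not bits. */
uint8_t dsss_spread(int bit)
{
    return bit ? (uint8_t)CHIPS : (uint8_t)~CHIPS;
}

/* Despread by correlating against the code: count agreeing chips.
 * More than 4 of 8 matches => 1, otherwise 0, so a couple of chip
 * errors caused by interference still decode to the right bit. */
int dsss_despread(uint8_t chips)
{
    uint8_t agree = (uint8_t)~(chips ^ CHIPS); /* 1 where chip matches code */
    int votes = 0;
    for (int i = 0; i < 8; i++)
        votes += (agree >> i) & 1;
    return votes > 4;
}
```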

HzJavier
  • Thank you so much for the suggestions. But actually our major issue is that our system works in real time, so when and from where we will get a signal is totally unpredictable. Let me explain in more detail: all the transmitters and receivers are placed within range of each other, i.e. suppose their range is 100 metres, then all are present inside 50 metres, so any signal from one transmitter can reach every node, and a signal may come at any time. So how can we resolve this? – user934070 Sep 09 '11 at 06:22
  • @user934070 Cell phone systems and WiFi typically use spread spectrum of some sort, or at least technologies that follow the same basic concepts. Cell phones and laptops are just as you describe: "when and from where we will get a signal is totally unpredictable". – Kellenjb Sep 09 '11 at 17:07