04-09-2019 06:34 PM
Hi,
I am using PSK modulation with the LabVIEW examples "PSK Tx.gvi" and "PSK Rx.gvi" together with a USRP2901 to take audio in from a microphone and play it out through speakers.
However, I noticed that if I remove the USRP and run my code alone, with Tx and Rx in one VI, it works: if I speak into the microphone I can hear myself through the speakers.
But with the USRP I only hear noise, so I suspect the problem is synchronization between Tx and Rx, which does not seem to be needed when this LabVIEW example uses a PN sequence as the input data to modulate and then demodulate.
To solve this, I think the MT Demodulate block in the PSK Rx.gvi example has a role to play, because it has a "synchronization" input, but I do not know how to use it.
Am I supposed to first add a training sequence in the PSK Tx.gvi example and then use it as this input?
The only training sequence I know is the Barker code, which consists of 1s and -1s (I assume those represent symbols in the case of BPSK; for QPSK, what should the 1 and -1 be replaced with?).
Also, what is the difference between the synchronization in this MT Demodulate block and the concept of symbol timing recovery (also known as pulse alignment)?
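To make my question concrete, here is how I currently picture the frame-synchronization part, sketched in Python/NumPy since I can write it out more easily than in G. The Barker mapping to QPSK and all the names here are my own assumptions for illustration, not taken from the NI examples:

```python
import numpy as np

# 13-chip Barker sequence: the classic +1/-1 training pattern.
# As BPSK symbols these map directly onto the two constellation points.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# One possible way (an assumption, not necessarily what MT Demodulate
# expects) to reuse a +1/-1 sequence for QPSK: put the same chip on both
# the I and Q rails, giving unit-energy complex symbols that sit on two
# of the four QPSK constellation points.
qpsk_training = (barker13 + 1j * barker13) / np.sqrt(2)

def frame_offset(rx_symbols, training):
    """Find where the training sequence starts in a received symbol
    stream by cross-correlating and picking the peak magnitude."""
    corr = np.abs(np.correlate(rx_symbols, training, mode="valid"))
    return int(np.argmax(corr))

# Demo: bury the training sequence after 20 low-level noise symbols.
rng = np.random.default_rng(0)
noise = 0.05 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
rx = np.concatenate([noise, qpsk_training, np.zeros(5, dtype=complex)])
print(frame_offset(rx, qpsk_training))  # 20
```

As I understand it, this correlation search answers "where does the frame start?" at the symbol level, whereas symbol timing recovery works earlier, at the sample level, choosing the right sampling instant within each symbol period. Is that the right distinction?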
Thank you!!
04-10-2019 06:29 AM
Could someone please help? Any information would be useful!
Thanks!!
04-11-2019 04:16 AM
Hello,
I found some articles that might be of use.
- https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019V6lSAE&l=en-GB
- https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Y9YSAU&l=en-GB
Have you tried calibrating and testing the USRP in MAX?
Kind regards,
Michael
04-11-2019 05:21 AM - edited 04-11-2019 05:25 AM
@mdoherty wrote:
Hello,
Found some articles that might be of use.
- https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019V6lSAE&l=en-GB
- https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Y9YSAU&l=en-GB
Have you tried calibrating and testing the USRP in MAX?
Kind regards,
Michael
Hi, thanks for your reply and help!
I found the second article interesting. From what I understood, I have to use the same USRP "Open session" and "Close session" for both the transmitter and the receiver, so that both operate at the same time.
However, I don't fully understand the part where it says to create Tx and Rx VIs for streaming and merge them. Could you please explain what you understood from that part?
About calibration: I am only a beginner, so I do not yet know what MAX is or how to use it.
04-11-2019 05:39 AM
Hello,
Both of these articles should give you the information you need about NI MAX.
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9KBSA0&l=en-GB
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9tuSAC&l=en-GB
You should be able to use NI MAX to make sure your hardware is working as intended.
I think it means taking both VIs and merging them into one.
Kind regards,
Michael
04-11-2019 08:20 PM
Hi eriym,
My suggestion is to use a known audio file such as "xx.wav" to check your code while the USRP is connected.
If the waveform test file has a long enough play time, you should hear some sound from the Rx block of your code.
I believe the synchronization issue is also important, but first we need to confirm that your code works at all, even if you miss the initial sound because of the synchronization issue.
Thanks.
04-12-2019 03:49 AM
@Chan82 wrote:
Hi eriym,
My suggestion is to use a certain audio file like "xx.wav" to check your code when USRP is connected.
If the waveform test file has a sufficient play time, you would hear any sound from rx block of your code.
I believe that the synchronization issue is also important but we need to confirm the functionality of your code though you miss the initial sound due to the synchronization issue.
Thanks.
Hello, thank you for the reply!
If I understood correctly, verifying that the code works with the USRP can be done more easily with an audio file than by speaking into the microphone in real time?
Do you have an idea of roughly how long the test audio file should be for me to hear the sound on the Rx side? On the order of milliseconds?
I will run the tests today and get back to you.
Thanks!!
04-12-2019 03:57 AM
Hi eriym,
If I understood well, verifying that the code is working with usrp can be done more easily with an audio file rather than speaking into the microphone "real time"?
--> Yes. Because of the synchronization issue, your voice may be lost if you speak for only a very short time. My suggestion is just one way to check your code. The key point is that you should verify the receive functionality even though there is still a synchronization issue between the Tx and Rx algorithms.
Do you have an idea of how many seconds approximately should the test audio file be for me to hear (on the rx side) the sound in the file? In the order of mseconds?
--> I think the time lost to synchronization will be short. Just a few seconds will be enough.
Thanks.
04-12-2019 01:04 PM - edited 04-12-2019 01:05 PM
Hi again,
I have tested my code with the USRP, this time with a sound file instead of live audio from the microphone. I am using QPSK modulation in the transmitter and demodulation in the receiver.
When I run the transmitter and then the receiver right after it, I only hear nonsense (every bit sounds the same), and I get a receive constellation graph that shows 4 symbols but keeps moving (the same thing happens when I test with real-time microphone audio). When the sound file on the transmit side ends, the receive constellation graph turns to nonsense.
So does this mean that the sound part of my code, in particular, is not working as it should? (I assume it is the sound part that is at fault, because I am using the LabVIEW examples "PSK Tx.gvi" and "PSK Rx.gvi" for modulation/demodulation; I only added the sound input in Tx and the sound output in Rx.)
Thanks again for your help!
04-14-2019 08:09 PM
Hi eriym,
So does this mean my code for the sound, especially, is not working as it should? (I am assuming it is the sound part of the code that is not good because I am using the Labview examples for mod/demodulation "PSK Tx.gvi" and "PSK Rx.gvi" , I just added the sound input in TX and Output in Rx.
--> I don't understand exactly what you mean by "just added the sound input in Tx and Output in Rx". But checking the sound input and output parts of your code is a good starting point, since you can receive normal constellation results in your Rx code. Did you check that your sampling rate is correct for playing back your voice on the Rx side?
"Sound File to Sound Output.vi" and "Generate Sound.vi", both LabVIEW examples, should help you check your sound handling.
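To illustrate the sampling-rate point, here is a rough sketch in Python with made-up numbers (not your actual configuration): if the Rx side writes samples to the sound output at a different rate than they were recorded, the audio plays for the wrong duration and at the wrong pitch.

```python
# Hypothetical example: a 2-second clip recorded at 44.1 kHz.
record_rate_hz = 44_100
num_samples = 2 * record_rate_hz  # 88,200 samples

def playback_duration_s(n_samples, playback_rate_hz):
    """Duration heard at the speaker for a given playback rate."""
    return n_samples / playback_rate_hz

# Rates match: the clip plays back in its original 2 seconds.
print(playback_duration_s(num_samples, 44_100))  # 2.0
# Rx configured at half the rate: the same samples take 4 seconds
# to play and the voice sounds an octave lower.
print(playback_duration_s(num_samples, 22_050))  # 4.0
```

So it is worth checking that the rate you pass to the sound output VI matches the rate the microphone samples were captured at.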
Thanks.