
I want to create a virtual keyboard that catches whatever key you 'speak' and sends that keystroke to the active application. Building the virtual keyboard and wiring it to speech recognition should be straightforward; the problem I'm running into is that the speech recognition is inaccurate!

For example, I say 'c' and it hears 'v' or something similar. This is extremely irritating, and although it works better with the microphone on my Logitech headset, it still sometimes fails to recognize what I'm saying. It's worse with the built-in microphone on my Lenovo laptop.

What's odd is that Google's speech recognition on the Google search page works perfectly, with or without the headset mic... Why is that?

Is there a way to improve my program?

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Speech.Synthesis;
using System.Speech.Recognition;
using System.Threading;

namespace TextToVoice
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }
        SpeechSynthesizer sSynth = new SpeechSynthesizer();
        PromptBuilder pBuilder = new PromptBuilder();
        SpeechRecognitionEngine sRecognize = new SpeechRecognitionEngine();

        private void Form1_Load(object sender, EventArgs e)
        {

        }

        private void button1_Click(object sender, EventArgs e)
        {
            pBuilder.ClearContent();
            pBuilder.AppendText(textBox1.Text);
            sSynth.Speak(pBuilder);
        }

        private void button2_Click(object sender, EventArgs e)
        {
            button2.Enabled = false;
            button3.Enabled = true;
            Choices sList = new Choices();
            // The grammar must contain every phrase the handler checks for,
            // otherwise "exit", "how are you" and "hey" can never be recognized.
            sList.Add(new string[] { "exit", "how are you", "hey",
                "one", "two", "three",
                "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
                "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z" });
            Grammar gr = new Grammar(new GrammarBuilder(sList));
            try
            {
                sRecognize.RequestRecognizerUpdate();
                sRecognize.LoadGrammar(gr);
                sRecognize.SpeechRecognized += sRecognize_SpeechRecognized;
                sRecognize.SetInputToDefaultAudioDevice();
                sRecognize.RecognizeAsync(RecognizeMode.Multiple);
            }
            catch (Exception ex)
            {
                // Don't swallow the failure silently; at least surface it.
                MessageBox.Show(ex.Message);
            }
        }

        void sRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "exit")
            {
                Application.Exit();
            }
            else if (e.Result.Text == "how are you")
            {
                sSynth.Speak("I am fine");
                textBox1.Text = "";
                pBuilder.ClearContent();
            }
            else if (e.Result.Text == "hey")
            {
                sSynth.Speak("Hello sir");
                textBox1.Text = "";
            }
            else
            {
                textBox1.Text = textBox1.Text + " " + e.Result.Text;
            }
        }

        private void button3_Click(object sender, EventArgs e)
        {
            button2.Enabled = true;
            button3.Enabled = false;
        }
    }
}
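For the keystroke-forwarding half mentioned at the top, a minimal sketch (untested, and assuming the grammar yields single-letter results) could hand recognized letters to the focused window via `SendKeys`:

```csharp
// Sketch only: forwarding a recognized letter to the active window.
// SendKeys lives in System.Windows.Forms; this assumes the grammar
// returns single letters like "a" in e.Result.Text.
void sRecognize_LetterRecognized(object sender, SpeechRecognizedEventArgs e)
{
    string text = e.Result.Text;
    if (text.Length == 1 && char.IsLetter(text[0]))
    {
        // SendWait types the character into whichever window has focus.
        SendKeys.SendWait(text);
    }
}
```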

Okay, an edit: basically, is there a way to make the application improve its understanding of what I am speaking? Windows Speech Recognition does this by having you read text aloud and learning how you pronounce words, but that is too tedious. :P
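One cheap improvement that doesn't require any training: reject low-confidence results instead of accepting the engine's best guess. This is only a sketch; the 0.6 threshold is an arbitrary value to tune, not a documented recommendation.

```csharp
// Ignore results the engine itself is unsure about, so a mumbled "c"
// doesn't get typed as "v". Repeating the letter is cheaper than undoing it.
void sRecognize_SpeechRecognizedFiltered(object sender, SpeechRecognizedEventArgs e)
{
    if (e.Result.Confidence < 0.6f)
        return; // too uncertain: say it again rather than type the wrong key
    textBox1.Text += " " + e.Result.Text;
}
```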

SinByCos
    C# is a programming language. SpeechRecognitionEngine is a part of the .NET library. – Andrew Hoffman Nov 20 '14 at 17:56
  • 2
    B, C, D, E, G, P, T, V, Z...a lot of single letters sound very similar in English (*pace* UK readers who say "zed"). You'll have much better luck with words than with single letters. – Kyralessa Nov 20 '14 at 18:17
  • Pronounced English letters can be ambiguous, and often difficult even for humans to distinguish. Consider using a [ICAO Spelling Alphabet](http://en.wikipedia.org/wiki/NATO_phonetic_alphabet) (alpha, bravo, charlie...) and recognition accuracy is bound to go up. – dbkk Nov 20 '14 at 18:45
  • Yeah but I am not creating a type-what-i-say program. I want it to just understand the single letters I say and then use them as keystrokes in my Virtual Keyboard... – SinByCos Nov 21 '14 at 11:50
  • Ooooh... dbkk, I think I get you... I'll look at it and try it out. :) – SinByCos Nov 21 '14 at 11:56
  • Maybe you want to use a [`DictationGrammar`](https://msdn.microsoft.com/en-us/library/system.speech.recognition.dictationgrammar.aspx), specifically the `grammar:dictation#spelling` variant? I actually have no idea, but I just started looking at this stuff. – drzaus Jun 08 '16 at 15:02
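The NATO-alphabet suggestion from the comments could be sketched like this, using `SemanticResultValue` to map each spoken word back to its letter (a partial table for illustration; extend it for the full alphabet):

```csharp
// Recognize "alpha", "bravo", ... but carry the target letter as the
// semantic value, so the handler never sees the code word itself.
var letters = new Choices();
letters.Add(new SemanticResultValue("alpha", "a"));
letters.Add(new SemanticResultValue("bravo", "b"));
letters.Add(new SemanticResultValue("charlie", "c"));
letters.Add(new SemanticResultValue("delta", "d"));

var grammar = new Grammar(new GrammarBuilder(letters));
sRecognize.LoadGrammar(grammar);

// In the SpeechRecognized handler, read the mapped letter instead of
// the spoken word:
// string letter = (string)e.Result.Semantics.Value; // "a" for "alpha"
```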

1 Answer


The quality of speech recognition depends on many parameters:

  • Microphone: as you noted, a headset microphone is better than the one in your laptop. Studio microphones will give the best results, I imagine.

  • Environment: you'll have a hard time making speech recognition work in a noisy environment compared to a quiet one (ideally a studio).

  • Pronunciation: for instance, I'm not a native English speaker and have a strong accent; when I tried Google's speech recognition, half the time it understood something else. Yet it understands practically everything when my girlfriend is speaking.

  • Dictionary: if you pronounce words that actually exist, the speech recognition engine can improve its results by consulting a dictionary. For example, if you say "elephant", it has a good chance of getting it right. If you say "eglefont", no engine will be able to write the word.

  • Contextual subsets: if the dictionary is bound to a context, it will be easier for the engine to understand you. For example, asking the engine to type what you say is much more difficult than asking it to understand just four commands: "start", "stop", "move left" and "move right".

While the first three points may help improve recognition in general, I think you should focus first on the last two.
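The "contextual subsets" point above can be made concrete with a few lines. This is a minimal, self-contained sketch: a grammar restricted to the four example commands is far easier for the engine to get right than open dictation, because it only has to choose among four candidates.

```csharp
// A tightly constrained grammar: the engine only ever has to pick one
// of these four phrases, which dramatically improves accuracy.
var commands = new Choices("start", "stop", "move left", "move right");

var engine = new SpeechRecognitionEngine();
engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
engine.SetInputToDefaultAudioDevice();
engine.SpeechRecognized += (s, e) =>
    Console.WriteLine("Heard: " + e.Result.Text);
engine.RecognizeAsync(RecognizeMode.Multiple);
```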

What is happening, I imagine, is that the recognition engine in your application is trying to understand words, and fails because you are pronouncing only letters. The fact that you've specified just letters in the grammar may not help: the engine may still be matching them as if they were words, whereas Windows Speech Recognition knows that single letters are letters, not words.

Since speech recognition in Windows can also be trained for a specific voice/pronunciation, there might be a way to train it for specific words (in your case, single letters). This being said, I haven't used this part of the .NET Framework, so I don't know how customizable it is.

Also related: Jeff Atwood, Whatever Happened to Voice Recognition?

Arseni Mourzenko
  • Okay.. I think I get it... I liked Google because it understood whatever "I" said while windows didn't. – SinByCos Nov 21 '14 at 11:51
  • But anyway I will try to create a program which will understand that what I speak are letters and not words... Maybe there is a way to edit the pronunciation and all according to where you are from. For example I am from India and somehow Google understands each and everything I speak, microphone or not, while my application doesn't. – SinByCos Nov 21 '14 at 11:53
  • @AlexStone: have you tried to speak *words* using the .NET Framework's speech recognition? This may help to compare it with Google's one. Also, do you use a recognition profile in Windows and have you trained it? – Arseni Mourzenko Nov 21 '14 at 12:00
  • I haven't tried much.. All I have done is created this code and tried both words and alphabets but they both are inaccurate... I HAVE used the Windows thing in the past but it was terrible so... :P – SinByCos Nov 21 '14 at 12:05
  • @AlexStone: trying Google's speech recognition and the one shipped with Windows side by side may help to understand the issue. If you train your profile and speech recognition in a context of a text editor still has a lot of errors, while in the same environment and with the same microphone, you achieve good results with Google's one, unfortunately the only thing you can do in your code is to switch to another speech recognition engine. Google might have [an API](https://www.google.com/intl/en/chrome/demos/speech.html), by the way. – Arseni Mourzenko Nov 21 '14 at 12:17
  • Hmmm... I'll have to see.. Thanks though. – SinByCos Nov 22 '14 at 01:21