Speech Recognition
Revision as of 13:03, 29 January 2010
Basics
Microsoft provides a Speech API - SAPI - which covers both Speech Recognition and Speech Synthesis. Most of this page deals with Speech Recognition (SR). SAPI is provided as a DLL, which works very nicely (thanks to Thomas and co.) with Visual Prolog 7.2 and a defined grammar. It is surprisingly accurate and responsive, even with a cheap microphone. Microsoft has produced SAPI recognizers for four or five languages, including English, and PDC has produced a SAPI recognizer for Danish: Dictus. Other than those from MS and PDC, there don't seem to be any SAPI recognizers for other languages.
SAPI is the link between programs and Speech Recognition Engines, much like ODBC is the link between programs and SQL databases.
Generally, the COM/DLL is provided with Windows (but see below - "Availability"). With a copy of the DLL in your project folder, it's easy to import it into your project. It is not necessary for it to be there at run time (i.e. there's no need to distribute it with your app to end users).
If you need it (see "Availability" below), you can download the SDK here: MS Download site; you need the 68 MB file SpeechSDK51.exe near the bottom of the page.
The SAPI overviews are here:
To do - SAPI5.1 on the MS site?
Also see:
- http://en.wikipedia.org/wiki/Microsoft_Speech_API
- http://support.microsoft.com/kb/306901/
- http://support.microsoft.com/kb/306537/EN-US/
When your program runs, you say something into the mic, pause, and the speech callback predicate is called. You then extract what was spoken as a string_list, which you pass onto your own predicate to process.
To do - Other data can be extracted?
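The callback-and-dispatch flow just described can be sketched outside of Prolog; a minimal Python analogy, where the handler table and command names are made up for illustration and are not part of SAPI:

```python
# Sketch of the callback-dispatch pattern: the speech callback hands over
# the recognised words as a list of strings, which you route to your own
# handler. Commands and handlers here are hypothetical.

def dispatch(words, handlers):
    """Join the recognised word list into a phrase and call its handler.
    Returns the handler's result, or None for an unknown phrase
    (mirroring a grammar rule that simply does not match)."""
    phrase = " ".join(words)
    handler = handlers.get(phrase)
    return handler() if handler is not None else None

handlers = {
    "turn red": lambda: "red",
    "turn blue": lambda: "blue",
}

dispatch(["turn", "red"], handlers)   # → "red"
```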
Extract from MS Speech SDK 5.1 Help File
The Microsoft Speech API (SAPI) is a software layer used by speech-enabled applications to communicate with Speech Recognition (SR) engines and Text-to-Speech (TTS) engines. SAPI includes an Application Programming Interface (API) and a Device Driver Interface (DDI). Applications communicate with SAPI using the API layer and speech engines communicate with SAPI using the DDI layer.
A speech-enabled application and an SR engine do not directly communicate with each other – all communication is done using SAPI. SAPI controls a number of aspects of a speech system, such as:
- Controlling audio input, whether from a microphone, files, or a custom audio source; and converting audio data to a valid engine format.
- Loading grammar files, whether dynamically created or created from memory, URL or file; and resolving grammar imports and grammar editing.
- Compiling standard SAPI XML grammar format, and conversion of custom grammar formats, and parsing semantic tags in results.
- Sharing of recognition across multiple applications using the shared engine, as well as all marshaling between engine and applications.
- Returning results and other information back to the application and interacting with its message loop or other notification method. Using these methods, an engine can have a much simpler threading model than in SAPI 4, because SAPI 5 does much of the thread handling.
- Storing audio and serializing results for later analysis.
- Ensuring that applications do not cause errors – preventing applications from calling the engine with invalid parameters, and dealing with applications hanging or crashing.
The SR engine performs the following tasks:
- Uses SAPI grammar interfaces and loads dictation.
- Performs recognition.
- Polls SAPI for information about grammar and state changes.
- Generates recognitions and other events to provide information to the application.
Setting up your project
When you create a new VIP project and try to generate the Prolog "glue" to the SAPI COM, the generated code is not perfect. As discussions in the forum show, this is not a trivial task, and everyone seems to write their COM servers differently. So Thomas has provided the tidied-up code here:
First generate the faulty COM code using the IDE (at the point where you add the DLL to the project) so that all the folders and classes are created, then overwrite those files with the correct code in Windows Explorer.
Dictation versus Commands
Briefly, SAPI SR works in two modes. The first is dictation - a "free form" mode for dictating letters etc. SR isn't great in this mode. ToDo - is there a training mode?
The second is "command mode", in which SAPI is provided with a grammar that makes recognition easier, since there is a restricted number of words to work with. Results are much better than in dictation mode. If you give it a grammar such as:
- "turn red"
- "turn blue"
it will totally ignore the command if you say "turn yellow" - the callback function is not called at all. If you say "turn bed", it will probably return "turn red", or nothing at all.
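SAPI does not expose its internal scoring, but the behaviour above ("turn bed" resolving to the nearest command, "turn yellow" matching nothing) can be imitated with a closest-match search over the command list; a rough Python analogy, where the 0.8 cutoff is an illustrative guess, not a SAPI parameter:

```python
import difflib

COMMANDS = ["turn red", "turn blue"]

def best_match(heard, commands=COMMANDS, cutoff=0.8):
    """Return the closest command phrase, or None when nothing is close
    enough (mimicking the callback not firing for an out-of-grammar phrase)."""
    matches = difflib.get_close_matches(heard, commands, n=1, cutoff=cutoff)
    return matches[0] if matches else None

best_match("turn bed")      # → "turn red"
best_match("turn yellow")   # → None
```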
The grammar file required (if you don't want dictation) is an XML file. The help file for writing an XML grammar file is provided in the download above. The rules for writing the XML file are straightforward, and for simple grammars the XML is easy to write. But if you get something slightly wrong (even though the XML structure itself is correct), you will get an exception when your program loads and compiles it (the SAPI engine compiles the XML).
The grammar file can have a rule saying "dictation" is expected as part of a rule.
Grammar Format Tags
The XML Grammar Format Tags are described in the SDK Manual. Briefly, these are:
- <GRAMMAR> - the file starts with this, and ends with </GRAMMAR>.
- <RULE> - the tag for defining sentences (a list of other tags). A RULE's parent must always be <GRAMMAR>.
- <DICTATION> - for free-form dictation.
- <LIST> or <L> - children can be lists of PHRASEs, for example.
- <PHRASE> or <P> - specifying the words to be recognised.
- <OPT> or <O> - specifying words that might be said (i.e. optional).
- <RULEREF> - for recursively calling other RULEs.
- <WILDCARD> - to allow recognition of some phrases without failing due to irrelevant, or ignorable, words.
- <RESOURCE> - to store arbitrary string data on rules (e.g. for use by a CFG Interpreter).
- <TEXTBUFFER> - used for applications needing to integrate a dynamic text box or text selection with a voice command.
Many of the tags can have children which are other tags, but not all, and equally some tags are restricted as to their parent tag. <DICTATION> and <RULEREF> can have no children. <RULE> can only have <GRAMMAR> as a parent. <GRAMMAR> must have one or more <RULE>s as children, and no other type (except <ID> which is discussed below).
Only the <PHRASE> and <OPT> tags contain words/phrases that will be recognisable as spoken words.
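The parent/child rules above lend themselves to a mechanical check; a partial Python sketch that enforces only the rules stated in this section (it is not a full SAPI grammar validator):

```python
import xml.etree.ElementTree as ET

def check_nesting(grammar_xml):
    """Check a few SAPI grammar nesting rules: the root must be <GRAMMAR>,
    a <RULE>'s parent must be <GRAMMAR>, and <DICTATION> and <RULEREF>
    may have no children. Returns a list of error strings (empty = OK)."""
    root = ET.fromstring(grammar_xml)
    errors = []
    if root.tag != "GRAMMAR":
        errors.append("root element must be <GRAMMAR>")
    for parent in root.iter():
        for child in parent:
            if child.tag == "RULE" and parent.tag != "GRAMMAR":
                errors.append("<RULE> parent must be <GRAMMAR>")
            if parent.tag in ("DICTATION", "RULEREF"):
                errors.append("<%s> cannot have children" % parent.tag)
    return errors
```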
SAPI.DLL Availability/Versions
- Windows Vista and Windows 7: SAPI 5.3 is part of Windows Vista and Windows 7, but it will only work for the languages that Microsoft supports (and Danish with PDC's engine).
- Windows XP: On XP you get SAPI 5.1 with Office 2003 (but not 2007), and you can get it as part of the SDK download mentioned above. You can also get it as an installer merge module to merge into an installer you create yourself.
- Notes:
- You cannot (do not!) install SAPI 5.1 on Windows Vista or Windows 7.
- A program that uses SAPI 5.1 can also (without any changes) use SAPI 5.3.
- The SAPI import provided by PDC is actually based on SAPI 5.3 (but it probably does not expose anything that is not also in 5.1). SAPI 5.3 is a conservative extension of SAPI 5.1. It's forwards compatible: a program that works with 5.1 will also work with 5.3, but not necessarily the other way around.
Examples
Here are a few examples of grammars. These are the actual contents as would be stored in an XML file.
- Example 1 - Recognises the word "hello" only.
<GRAMMAR>
    <DEFINE>
        <ID NAME="test" VAL="1"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello</P>
    </RULE>
</GRAMMAR>
- Example 2 - All three of the following recognise the phrase "hello world".
<GRAMMAR>
    <DEFINE>
        <ID NAME="test" VAL="1"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello</P>
        <P>world</P>
    </RULE>
</GRAMMAR>

<GRAMMAR>
    <DEFINE>
        <ID NAME="test" VAL="1"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello world</P>
    </RULE>
</GRAMMAR>

<GRAMMAR>
    <DEFINE>
        <ID NAME="test" VAL="1"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello
            <P>world</P>
        </P>
    </RULE>
</GRAMMAR>
- Example 3 - Recognises the phrases "hello" and "hello world" (i.e. "world" is optional)
<GRAMMAR>
    <DEFINE>
        <ID NAME="test" VAL="1"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello</P>
        <O>world</O>
    </RULE>
</GRAMMAR>
- Example 4 - Recognises "hello one two three"
<GRAMMAR>
    <DEFINE>
        <ID NAME="ref44" VAL="1"/>
        <ID NAME="test" VAL="2"/>
    </DEFINE>
    <RULE NAME="test" TOPLEVEL="ACTIVE">
        <P>hello</P>
        <RULEREF NAME="ref44"/>
    </RULE>
    <RULE NAME="ref44">
        <P>one</P>
        <P>two</P>
        <P>three</P>
    </RULE>
</GRAMMAR>
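For experimentation, simple one-rule grammars like those above can also be generated programmatically; a Python sketch (the helper name is made up) that emits the Example 3 shape - required words as <P>, optional words as <O>:

```python
import xml.etree.ElementTree as ET

def simple_grammar(rule_name, words, optional=()):
    """Build a one-rule grammar string: each required word becomes a <P>,
    each optional word an <O>, mirroring Example 3 above."""
    grammar = ET.Element("GRAMMAR")
    define = ET.SubElement(grammar, "DEFINE")
    ET.SubElement(define, "ID", NAME=rule_name, VAL="1")
    rule = ET.SubElement(grammar, "RULE", NAME=rule_name, TOPLEVEL="ACTIVE")
    for word in words:
        ET.SubElement(rule, "P").text = word
    for word in optional:
        ET.SubElement(rule, "O").text = word
    return ET.tostring(grammar, encoding="unicode")

simple_grammar("test", ["hello"], optional=["world"])
```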
To do - provide more examples
Training
Voice training is performed via the Windows Control Panel - Speech.