

Make Your ASP.NET Applications Talk with Text-to-Speech : Page 3

Silence may be golden, but increasingly, applications, appliances, and other automated systems are acquiring the ability to speak. You can take advantage of text-to-speech technology to voice-enable your .NET applications.

Speech Synthesis Markup Language
The text-to-speech engine applies certain default rules when speaking text referenced by the InlineContent property, but developers can control how the engine renders the audio by using Speech Synthesis Markup Language (SSML) elements. SSML is an XML-based markup language defined by the World Wide Web Consortium (W3C). Table 1 lists the SSML elements supported by the SASDK.

Table 1. Supported SSML Elements: The table lists the SSML elements supported by the SASDK and used to control the way text is rendered by the text-to-speech engine.
SSML Element Description
ssml:paragraph/ssml:sentence Used to separate text into sentences and paragraphs.
ssml:say-as Used to specify the way text is spoken. It accepts several different attributes to identify the type of text.
ssml:phoneme Used to control the way a word is pronounced.
ssml:sub Used to specify a substitute word or phrase in place of the specified text.
ssml:emphasis Used to increase the emphasis placed on a word or phrase.
ssml:break Used to add pauses between words in a text.
ssml:prosody Used to control the pitch, rate, and volume of the text.
ssml:audio Used to insert recorded audio files.
ssml:mark Used to insert a mark at a certain point in the text. This mark can then be used to signify an event or to trigger an action.
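
Several of these elements can be combined within a prompt's InlineContent. As a rough sketch (the element names come from Table 1, but the text content and attribute values here are invented for illustration), a fragment might look like this:

```xml
<InlineContent>
  <ssml:sentence>
    Your total is <ssml:emphasis>ten dollars</ssml:emphasis>.
    <ssml:break />
    <ssml:prosody rate="slow" volume="loud">
      Please confirm your order.
    </ssml:prosody>
  </ssml:sentence>
</InlineContent>
```

Here the engine would stress "ten dollars," pause, and then speak the final sentence slowly and loudly.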

The sample application illustrates the say-as and prosody SSML elements in action. Each button on the Default.aspx page corresponds to a prompt control, and each prompt control includes an ssml:say-as or ssml:prosody element within its InlineContent element. Here's an example of the markup for one of these controls:

<speech:prompt id="prmSayAsAcronym" runat="server">
  <InlineContent>
    <ssml:say-as type="acronym">
      <speech:Value runat="server" TargetElement="txtText"
        TargetAttribute="value"></speech:Value>
    </ssml:say-as>
  </InlineContent>
</speech:prompt>

Prompts start when the user clicks one of the buttons, which executes JavaScript such as the following:

function SayAsAcronym() {
    prmSayAsAcronym.Start();
}

In the example above, the prompt named prmSayAsAcronym includes the ssml:say-as element, which specifies that any text contained within the txtText input element should be spoken as an acronym. So, if you were to type "ASP" into the text element and click "Say As Acronym," the text-to-speech engine would read each letter individually.
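The prosody prompts follow the same pattern. A sketch of such a control (the id and attribute values here are illustrative, not taken from the sample application) might slow the rate and lower the pitch of whatever is typed into txtText:

```xml
<speech:prompt id="prmProsody" runat="server">
  <InlineContent>
    <ssml:prosody rate="slow" pitch="low">
      <speech:Value runat="server" TargetElement="txtText"
        TargetAttribute="value"></speech:Value>
    </ssml:prosody>
  </InlineContent>
</speech:prompt>
```

A matching JavaScript handler would simply call prmProsody.Start(), just as SayAsAcronym() does for the say-as prompt.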

To experiment with the sample application, enter different snippets of text and click each of the buttons to hear how the text-to-speech engine interprets the text. I also urge you to change the element and attribute values and observe how each change affects the rendered speech. The SASDK gives developers fine-grained control over how the text-to-speech engine renders text, so experimentation can result in a more natural-sounding speech-based application.

Sara Morgan Rea is a 2007 Microsoft MVP for Office Communications Server. Her first book, Building Intelligent .NET Applications, was published in 2005. In addition to co-authoring several Microsoft Training Kits, she recently published Programming Microsoft Robotics Studio. She currently works as a robotic software engineer at CoroWare.com.