= Voice Import Tools Tutorial: How to Build a New Voice with Voice Import Tools =

This tutorial explains how to build a new voice with the Voice Import Tools (VIT) in the MARY environment. The Voice Import Tool is a Graphical User Interface (GUI) containing a set of voice import components; it helps the user build new voices for MARY (Modular Architecture for Research in speech sYnthesis). The GUI is designed primarily so that any user can build new voices easily, without needing to know many technical details of speech synthesis. Currently, the Voice Import Tool mainly supports the following categories:
 1. Feature extraction from acoustic data
 2. Feature vector extraction from text data
 3. Automatic labeling
 4. Unit selection
 5. Voice installation into MARY

== Requirements ==
 * Operating system: Linux (recommended). Windows can also be used, provided the dependent tools below can be compiled properly.
 * MARY TTS, recent version. Download link: http://mary.dfki.de/Download
 * OpenMary: SVN from http://mary.opendfki.de
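Before starting, it can help to verify that the basics are in place. The following is a minimal sketch, not part of the tools themselves; `MARY_BASE` is a placeholder path that you should point at your own MARY installation:

```shell
#!/bin/sh
# Pre-flight check for voice building (a sketch, not part of the tools).
# MARY_BASE is an assumed placeholder -- point it at your installation.
MARY_BASE="${MARY_BASE:-/path/to/mary}"

missing=0
# java runs the Voice Import Tools; svn fetches the OpenMary sources.
for tool in java svn; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "warning: $tool not found on PATH"
        missing=$((missing + 1))
    fi
done

if [ -d "$MARY_BASE" ]; then
    echo "MARY base directory found: $MARY_BASE"
else
    echo "warning: MARY base directory $MARY_BASE does not exist"
fi
echo "$missing required tool(s) missing"
```

If anything is reported missing, install it (or fix your PATH) before continuing with the steps below.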
== Dependent Tools ==
 * Praat pitch marker or Snack, for pitch marks. Download link for Praat: http://www.fon.hum.uva.nl/praat
 * Edinburgh Speech Tools Library, for MFCCs and wagon (CART). Download link: http://www.cstr.ed.ac.uk/projects/speech_tools/
 * EHMM or Sphinx, for automatic labeling. EHMM is available with festvox-2.1 (recent version): http://festvox.org/download.html [[BR]] Sphinx: http://cmusphinx.sourceforge.net/webpage/html/download.php

== Voice Import Components ==
The following components are available:
 * !PraatPitchmarker
 * !SnackPitchmarker
 * MCEPMaker
 * Festvox2MaryTranscripts
 * Mary2FestvoxTranscripts
 * !PhoneUnitFeatureComputer
 * !HalfPhoneUnitFeatureComputer
 * EHMMLabeler
 * !SphinxLabelingPreparator
 * !SphinxTrainer
 * !SphinxLabeler
 * MRPALabelConverter
 * !HalfPhoneUnitfileWriter
 * !HalfPhoneFeatureFileWriter
 * !JoinCostFileMaker
 * !AcousticFeatureFileWriter
 * CARTBuilder
 * CARTPruner
 * !VoiceInstaller

== How to Run ==
 1. First you need two basic ingredients for voice building:
   a. wave files
   b. the corresponding transcriptions (in MARY or Festival format)
 2. Create a new voice-building directory and put all wave files in its "wav" subdirectory.
 3. Run the following commands in a shell from the voice-building directory:
{{{
export MARY_BASE="/path/to/mary"
java -Xmx1024m -classpath $MARY_BASE/java:$MARY_BASE/java/mary-common.jar:\
$MARY_BASE/java/signalproc.jar:$MARY_BASE/java/freetts.jar:$MARY_BASE/java/jsresources.jar:\
$MARY_BASE/java/log4j-1.2.8.jar -Djava.endorsed.dirs=$MARY_BASE/lib/endorsed \
de.dfki.lt.mary.unitselection.voiceimport.DatabaseImportMain
}}}
The GUI, which supports voice building, looks like this:
{{{ #!html
}}}
The first time you run the shell script above, it presents a GUI window asking for a few basic configuration settings. Almost all other settings are derived from these first settings and set automatically. After clicking the "Save" button, you get to the main window, where you can see a list of modules. A component is executed by ticking the associated checkbox and clicking "Run". The global configuration settings window looks like this:
{{{ #!html
}}}
'''Global Configuration Settings:'''[[BR]]
Domain - general or limited[[BR]]
Gender - male or female[[BR]]
Locale - specifies the language of the domain (de - German or en - English)[[BR]]
(Currently, MARY supports only two languages: 1. German 2. English)[[BR]]
Marybase - MARY installation directory (absolute path)[[BR]]
Rootdir - voice-building directory (absolute path)[[BR]]
Wavdir - directory where the wave files are stored[[BR]]
Textdir - directory where the corresponding transcriptions are stored[[BR]]

The user can also change the settings for each individual component by clicking on the wrench symbol next to the component. Clicking on "Settings" takes you to the window where you can change the basic settings. In a settings window, you can switch to the settings of another module, or to the basic settings, via the drop-down menu. Basically, all modules need to be run to import the voice into MARY. For more detailed information, check the general help file by clicking "Help" in the main window; clicking on help in a settings window opens a help window with details about the displayed settings. We recommend giving absolute paths in the individual configuration settings. These configuration settings are passed as arguments to the components to perform the corresponding tasks.

The import tool creates two files in the directory where you started it: database.config and importMain.config. database.config contains the values of the settings; you can also change the settings in this file, but be aware that this may cause problems.

The simplest way of using the Voice Import Components:
 * Give configuration settings for each and every component.
 * Tick all components.
 * Click the RUN button.
This completes all tasks sequentially. However, not all components are needed to build a new voice; for example, for automatic labeling you can choose either EHMM or Sphinx.

== Explanation of Individual Voice Import Components ==

== 1. Feature Extraction from Acoustic Data ==

'''!PraatPitchmarker'''[[BR]]
Computes pitch marks with the help of Praat; you need to install or compile Praat on your machine.[[BR]]
It also corrects the pitch marks to align them with nearby zero crossings.
Configuration settings:
 * command - absolute path of the Praat executable
 * pmDir - output directory for the Praat pitch marks
 * corrPmDir - output directory for the corrected pitch marks (pitch marks tuned towards zero crossings)
 * maxPitch, minPitch - the pitch range (e.g. male: 50-200, female: 150-300)

'''MCEPMaker'''[[BR]]
Calculates MFCCs from the speech wave files, using the Edinburgh Speech Tools.
Configuration settings:
 * estDir - Edinburgh Speech Tools compiled directory
 * pmDir - Praat pitch marks directory
 * corrPmDir - corrected pitch marks directory
 * mcepDir - output directory for the MFCCs

== 2. Support for Transcription Conversion ==

'''Festvox2MaryTranscripts'''[[BR]]
Converts transcriptions in the Festvox format (e.g. txt.done.data) into the format MARY supports: one text file per wave file. All voice import components read transcriptions in the MARY format, so this component is very useful if you have transcriptions in the Festvox format.
Configuration settings:
 * transcriptFile - Festvox-format transcription file (absolute path)

'''Mary2FestvoxTranscripts'''[[BR]]
Converts MARY-format transcriptions into a Festvox-format transcription file; the reciprocal of the component above.
Configuration settings:
 * transcriptFile - output Festvox-format transcription file (absolute path)

== 3. Feature Vector Extraction from Text Data ==

'''!PhoneUnitFeatureComputer'''[[BR]]
Computes phone feature vectors for the unit-selection voice-building process.[[BR]]
 * Note: This module requires a running Mary server from the MARY installation.[[BR]]
You can connect to a different server by altering the settings. See the settings help for more information on this.
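Since the feature computers need a running Mary server, it can be worth probing for one before ticking these components. A minimal sketch, assuming the default host and port settings (and an OpenBSD-style `nc` on the PATH):

```shell
#!/bin/sh
# Probe for a running MARY server (a sketch; adjust host and port to
# match your maryServerHost and maryServerPort settings).
MARY_HOST="${MARY_HOST:-localhost}"
MARY_PORT="${MARY_PORT:-59125}"

if command -v nc >/dev/null 2>&1 && nc -z "$MARY_HOST" "$MARY_PORT" 2>/dev/null; then
    server_up=yes
else
    server_up=no
fi
echo "MARY server at $MARY_HOST:$MARY_PORT reachable: $server_up"
```

If the probe prints "no", start the Mary server from your MARY installation before running the feature computers.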
Which features are computed depends on the configuration file "targetfeatures.config". This file resides in the Marybase/conf/ directory and directs the server to compute the feature vectors.
Configuration settings:
 * featureDir - output directory for the computed phone feature vectors (absolute path)
 * maryServerHost - server name
 * maryServerPort - socket port number (default 59125)

'''!HalfPhoneUnitFeatureComputer'''[[BR]]
Works like the component above, but computes half-phone level feature vectors. Here the "halfphone-targetfeatures.config" file in the Marybase/conf/ directory directs the server to compute half-phone level feature vectors.
Configuration settings:
 * featureDir - output directory for the computed half-phone feature vectors (absolute path)
 * maryServerHost - server name
 * maryServerPort - socket port number (default 59125)

== 4. Automatic Labeling ==

'''EHMMLabeler'''[[BR]]
The EHMM labeler is a labeling tool that generates label files from the wave files and the corresponding transcriptions. The basic EHMM tool is available with the recent Festvox version; to run the EHMM labeler in the MARY environment, you need to compile the EHMM tool on your machine. Labeling may take a long time, depending on the size of the data and the system configuration.
The EHMMLabeler supports:
 1. database labeling with forced alignment, training with flat-start initialization
 2. database labeling with forced alignment, training with initialized models (re-training)
 3. database labeling with forced alignment using already existing models (decoding only)
Configuration settings:
 * ehmmDir - directory where the basic EHMM package was compiled
 * eDir - directory (absolute path) to which the transcription is copied (in EHMM-supported format) and where the EHMM model is stored
 * featureDir - directory where the phone feature vectors were computed (used to get the phone sequence)
 * startEhmmModelDir - directory of an already existing EHMM model, used to initialize the EHMM models (for re-training or decoding)
 * reTrainFlag - (true | false) true - re-train, initializing with the given models; false - decode only
 * outputLabDir - directory in which the generated labels are stored

'''Automatic Labeling using Sphinx Tools:'''[[BR]]
The !SphinxLabelingPreparator, !SphinxTrainer and !SphinxLabeler components perform automatic labeling with the Sphinx tools. These three components need !SphinxTrain, the Sphinx decoder and the Edinburgh Speech Tools for training models and forced alignment.

'''!SphinxLabelingPreparator'''[[BR]]
Prepares the setup that !SphinxTrain needs to train models.[[BR]]
Configuration settings:
 * estDir - Edinburgh Speech Tools compiled directory
 * maryServerHost - server name
 * maryServerPort - socket port number (default 59125)
 * sphinxTrainDir - !SphinxTrain installation directory
 * stDir - directory (absolute path) to which the dictionaries and temporary files are copied (in Sphinx-supported format)
 * transcriptFile - Festvox-format transcription file (absolute path)

'''!SphinxTrainer'''[[BR]]
Trains the models required for labeling, using !SphinxTrain. This may take a long time, depending on the size of the data and the system configuration.
Configuration settings:
 * stDir - absolute path of the directory where the dictionaries and temporary files were stored by !SphinxLabelingPreparator

'''!SphinxLabeler'''[[BR]]
Produces labels with the help of the models built by the !SphinxTrainer, using the Sphinx-2 decoder for forced alignment.
Configuration settings:
 * sphinx2Dir - absolute path of the Sphinx-2 installation directory
 * stDir - absolute path of the directory where the dictionaries, temporary files and models were stored by !SphinxLabelingPreparator and !SphinxTrainer

'''MRPALabelConverter'''[[BR]]
If you have labeled data in the Festvox format using the MRPA phoneset, use this module to convert the phones into the phoneset used by Mary.
Configuration settings:
 * mrpaLabDir - MRPA label file directory

== 5. Label or Pause Correction and Label-Feature Alignment ==

'''!LabelledFilesInspector'''[[BR]]
Lets the user browse through the aligned labels and listen to the corresponding wave files. This is useful for manual perceptual verification of the alignment.
Configuration settings:
 * corrPmDir - directory of the corrected pitch marks

'''!PhoneUnitLabelComputer''' and '''!HalfPhoneUnitLabelComputer'''[[BR]]
These components convert the label files into the label files used by Mary. !PhoneUnitLabelComputer produces phone labels, !HalfPhoneUnitLabelComputer produces half-phone labels. You need both to build the voice.
Configuration settings:
 * labelDir - output phone label directory for !PhoneUnitLabelComputer; output half-phone label directory for !HalfPhoneUnitLabelComputer

'''!PhoneLabelFeatureAligner'''[[BR]]
Tries to align the labels and the feature vectors. If the alignment fails, you can start the automatic pause correction.[[BR]]
This works as follows:
 * Pauses that are in the label file but not in the feature file are deleted from the label file, and the durations of the previous and next labels are stretched.
 * Pauses that are in the feature file but not in the label file are inserted into the label file with length zero.
If there are still errors after the pause correction, you are prompted for each error. You can skip the error or remove the corresponding file from the basename list (the list of files that are used for your voice); "Skip all" and "Remove all" do this for all problematic files. "Edit unit labels" allows you to edit the label file. "Edit RAWMARYXML" lets you edit the maryxml that is the input for computing the features. You need a running Mary server in order to recompute the features from the maryxml; you can alter the host and port settings for the server by altering the settings for the !UnitFeatureComputer.
Configuration settings:
 * featureDir - phone feature vectors directory
 * labDir - phone labels directory

'''!HalfPhoneLabelFeatureAligner'''[[BR]]
Works the same as !PhoneLabelFeatureAligner, but for half-phone units.
Configuration settings:
 * featureDir - half-phone feature vectors directory
 * labDir - half-phone labels directory

== 6. Basic Data Files ==

The following components create basic binary files that contain the whole voice database, so that the database can be accessed more easily and quickly. These files are needed for various voice-building steps and for synthesis.

'''!WaveTimelineMaker'''[[BR]]
The !WaveTimelineMaker splits the waveforms into datagrams to be stored in a timeline in Mary format. It produces a binary file that contains all wave files.
Configuration settings:
 * corrPmDir - directory of the corrected pitch marks
 * !WaveTimeline - file containing all wave files; will be created by this module

'''!BasenameTimelineMaker'''[[BR]]
The !BasenameTimelineMaker takes a database root directory and a list of basenames, and associates the basenames with absolute times in a timeline in Mary format.
Configuration settings:
 * pmDir - directory containing the pitch marks
 * timelineFile - file containing the list of files and their times; will be created by this module

'''MCepTimelineMaker'''[[BR]]
The MCepTimelineMaker takes a database root directory and a list of basenames, and converts the related wav files into an mcep timeline in Mary format.
Configuration settings:
 * mcepDir - directory containing the mcep files
 * mcepTimeline - file containing all mcep files; will be created by this module

== 7. Building Acoustic Models ==

'''!PhoneUnitfileWriter'''[[BR]]
Produces a file containing all phone-sized units.
Configuration settings:
 * corrPmDir - directory containing the corrected pitch marks
 * labelDir - directory containing the phone labels
 * unitFile - file containing all phone units; will be created by this module

'''!PhoneFeatureFileWriter'''[[BR]]
Produces a file containing all the target cost features for the phone-sized units. The module needs a file defining which features are to be used and what weights are given to them; they must be the same features as the ones the !PhoneFeatureComputer used. If you do not have a feature definition, the module tries to create one. For more information, see the example file Marybase/lib/modules/import/examples/PhoneUnitFeatureDefinition.txt.
Configuration settings:
 * featureDir - directory containing the phone features
 * featureFile - file containing all phone units and their target cost features; will be created by this module
 * unitFile - file containing all phone units
 * weightsFile - file containing the list of phone target cost features, their values and weights

'''DurationCARTTrainer'''[[BR]]
Builds an acoustic model of the durations in the database, using the program "wagon" from the Edinburgh Speech Tools.
Configuration settings:
 * durTree - file containing the duration CART; will be created by this module
 * estDir - directory containing the local installation of the Edinburgh Speech Tools
 * featureDir - directory containing the phone features
 * featureFile - file containing all phone units and their target cost features
 * labelDir - directory containing the phone labels
 * stepwiseTraining - "false" or "true"
 * unitFile - file containing all phone units
 * waveTimeline - file containing all wave files

'''F0CARTTrainer'''[[BR]]
Builds acoustic models of F0, analogously to DurationCARTTrainer. It uses "wagon" and the files produced by !PhoneUnitfileWriter and !PhoneFeatureFileWriter.
Configuration settings:
 * estDir - directory containing the local installation of the Edinburgh Speech Tools
 * f0LeftTreeFile - file containing the left F0 CART; will be created by this module
 * f0MidTreeFile - file containing the middle F0 CART; will be created by this module
 * f0RightTreeFile - file containing the right F0 CART; will be created by this module
 * featureDir - directory containing the phone features
 * featureFile - file containing all phone units and their target cost features
 * labelDir - directory containing the phone label files
 * stepwiseTraining - "false" or "true"
 * unitFile - file containing all phone units
 * waveTimeline - file containing all wave files

('''Under construction''' - to be continued)
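Both CART trainers above rely on the "wagon" program from the Edinburgh Speech Tools. For orientation only, a hand-run invocation might look roughly like the sketch below; this is not the exact command line the trainers construct, the flag names follow the wagon manual, and the file names are placeholders:

```shell
#!/bin/sh
# Illustrative wagon call (a sketch):
#   dur.desc - feature description file
#   dur.data - training vectors extracted from the database
#   -stop 50 - stop splitting below 50 examples per leaf
ESTDIR="${ESTDIR:-/path/to/speech_tools}"

if [ -x "$ESTDIR/bin/wagon" ]; then
    "$ESTDIR/bin/wagon" -desc dur.desc -data dur.data -stop 50 -output dur.tree
    wagon_status=ran
else
    echo "wagon not found under $ESTDIR -- set ESTDIR to your Speech Tools build"
    wagon_status=missing
fi
```

In the Voice Import Tools you never call wagon by hand; the trainers do it for you via the estDir setting.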