This repository has been archived by the owner on Aug 15, 2023. It is now read-only.

Question about wav content #4

Open
emphasize opened this issue Jul 15, 2020 · 8 comments

Comments

@emphasize

Hi el-tocino,

I'm struggling a bit to find a German dataset to speed up the process of finding fake words.

There are some sets, but almost exclusively spoken sentences (half-sentences). Some are short, but I'm not certain that this even qualifies as training material. Is precise-train-incremental restricted to spoken words?

@el-tocino
Owner

You can train precise for recognizing sneezes, actually, if so inclined.

Using sox you can trim longer clips down based on the silence between words. Aim for 3s or less per clip, then dump them in the nww folders as appropriate. It's still better to use false-activation words and noises where possible. Random speech will help to an extent, but you also want to fine-tune the model so it is accurate both on the wake word and in discerning not-wake-word input.
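Where sox isn't available, the same silence-based trimming can be sketched in plain Python with the standard-library wave module. This is an illustrative sketch, not part of Precise or the workflow above; the amplitude threshold (500), 10 ms window, and gap length are assumptions you would tune per recording:

```python
import struct
import wave

def split_on_silence(path, out_prefix, thresh=500,
                     min_silence_s=0.3, max_clip_s=3.0):
    """Cut a 16-bit mono wav at silent gaps, keeping clips <= max_clip_s."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)

    win = max(1, int(0.01 * rate))  # 10 ms analysis window
    loud = [max(abs(s) for s in samples[i:i + win]) >= thresh
            for i in range(0, len(samples), win)]

    # Group consecutive loud windows into segments, splitting wherever
    # the quiet run is at least min_silence_s long.
    segments, start, quiet = [], None, 0
    for idx, is_loud in enumerate(loud):
        if is_loud:
            if start is None:
                start = idx
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet * win >= min_silence_s * rate:
                segments.append((start, idx - quiet + 1))
                start = None
    if start is not None:
        segments.append((start, len(loud)))

    paths = []
    max_frames = int(max_clip_s * rate)
    for n, (a, b) in enumerate(segments):
        chunk = samples[a * win:b * win][:max_frames]  # hard cap per clip
        out = "%s-%02d.wav" % (out_prefix, n)
        with wave.open(out, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(rate)
            w.writeframes(struct.pack("<%dh" % len(chunk), *chunk))
        paths.append(out)
    return paths
```

Each resulting clip can then be sorted into the wake-word or not-wake-word folders by hand.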

@emphasize
Author

Thanks, that's not meant as a substitute, more an addition to the word-finder methods you suggest.

Mozilla's Common Voice dataset is an acceptable source, then. Sadly not single words, but short sentences of six or fewer words, and a hefty amount of data that's at least somewhat "peer-reviewed".

Do you recommend some ambient sound sources besides the tuxfamily.org suggestion?

--
Short additional question: what does the batch (-b) option flag of precise-train do?

Cheers,
Swen

@el-tocino
Owner

Precise community data has a not-wake-word section including some noises. The Google Speech Commands dataset is an ideal addition to the not-wake-words (though it's large and will significantly increase training time). Recording ambient noise is pretty easy with a cell phone as well.

Batch size is useful for making a wider pass over the data in each epoch. I tend to use pretty large sizes (5000?); some experimentation would be useful.
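For context (my gloss, not Precise's documentation): the batch size is how many samples the trainer feeds through the network per weight update, so a larger batch means fewer, wider updates per epoch. A minimal sketch of the idea:

```python
def batches(data, batch_size):
    """Yield successive slices of data; one epoch is one full pass."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# e.g. 10,000 training samples with batch size 5000 -> 2 weight updates per epoch
steps_per_epoch = len(list(batches(list(range(10_000)), 5000)))
```

The trade-off is the usual one: larger batches smooth the gradient and speed up a pass, smaller batches give noisier but more frequent updates.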

Latest Common Voice now has a large subset of single word entries.

@emphasize
Author

Google Research has datasets for a lot of different languages (Nepali, who would have guessed), but unfortunately no German one. Or are you suggesting that the language itself plays a lesser role?
GSC v2 is already downloaded, but then I realized: there's not much spoken English around here ;)

I think I will train them in a Raspbian virtual machine, if that's possible, or switch to Windows completely for that process. My Pi buddies are already sweatin'.

@el-tocino
Owner

The language isn't as important as the phonemes and the patterns of the words.

I'd train on a desktop rather than a pi with that volume of data. ;)

@emphasize
Author

emphasize commented Jul 17, 2020

After reviewing the Common Voice dataset more closely, I think I'm pressed to trim parts down

"based on silence between words"

Do you mind sharing some useful sox commands?

Cheers

@emphasize
Author

emphasize commented Jul 17, 2020

I have a suggestion myself.

https://d-rhyme.de/worte-verdrehen/

In general it's more for our German audience, but this particular section "twists words": the middle part of the name/word is replaced by random syllable(s)/letters while the word length stays constant, which makes it language-agnostic.

Let's say the wake word is "Samira". It spits out Salisa, Savita, Saliga, Sakita, ...

In my understanding that should be a great addition to the word-finder/rhyme methods given in your howto.
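That "word twisting" can be sketched in a few lines of Python. This is a hypothetical re-implementation of the idea, not d-rhyme.de's actual algorithm: keep the head and tail of the wake word and randomize the middle, so the length stays constant:

```python
import random
import string

def twist(word, keep_front=2, keep_back=1, n=5, seed=None):
    """Generate same-length near-misses of a wake word, e.g. samira -> sa???a."""
    rng = random.Random(seed)
    middle = len(word) - keep_front - keep_back
    if middle <= 0:
        raise ValueError("word too short to twist")
    fakes = set()
    while len(fakes) < n:
        mid = "".join(rng.choice(string.ascii_lowercase) for _ in range(middle))
        fake = word[:keep_front] + mid + word[-keep_back:]
        if fake != word:  # never emit the real wake word
            fakes.add(fake)
    return sorted(fakes)
```

Recording a speaker reading a batch of these and dropping them into the not-wake-word folder would follow the same pattern as the other word-finder methods.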

@el-tocino
Owner

Try it and see?

Google "sox silence"; I don't have the command handy, and the docs will explain the parameters better.
