VueJs Search Input With SpeechRecognition API
A search input for VueJs and Tailwind that uses the SpeechRecognition API.
I recently needed to incorporate Speech Recognition into an existing search input on a VueJs application. This post outlines how I did that and gives you a reasonably good starting point to do the same.
What is the SpeechRecognition API?
SpeechRecognition is a Web Speech API interface that lets you convert speech to text with JavaScript in the browser.
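In its simplest form, usage looks something like this. This is a minimal, framework-free sketch; note that Chromium-based browsers currently expose the constructor as `webkitSpeechRecognition`:
// Minimal sketch of the SpeechRecognition API outside of any framework.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.interimResults = true; // emit partial transcripts while the user is still speaking
recognition.addEventListener('result', (event) => {
  // Each result holds one or more alternatives; take the top one from each.
  const transcript = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join('');
  console.log(transcript);
});
recognition.start();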
This post picks up from a search input I had already built with VueJs and Tailwind in a Laravel Jetstream application using InertiaJS. The article describing how to build this search input can be found here.
The Search Input Component
Here's the entire component where we left off in the previous article:
<template>
<div class="w-1/2 bg-white px-4 dark:bg-gray-800">
<label for="search" class="hidden">Search</label>
<input
id="search"
ref="searchRef"
v-model="search"
class="h-10 w-full cursor-pointer rounded-full border border-gray-500 bg-gray-100 px-4 pb-0 pt-px text-gray-700 outline-none transition focus:border-purple-400"
:class="{ 'transition-border': search }"
autocomplete="off"
name="search"
placeholder="Search"
type="search"
@keyup.esc="search = null"
/>
</div>
</template>
<script setup>
import { ref, watch } from 'vue';
import { Inertia } from '@inertiajs/inertia';
import { debounce } from 'lodash';
const props = defineProps({
routeName: String,
});
let search = ref(null);
let sort = ref(null);
const searchRef = ref(null);
watch(search, () => {
if (search.value) {
searchMethod();
} else {
Inertia.get(route(props.routeName));
}
});
const searchMethod = debounce(() => {
Inertia.get(
route(props.routeName),
{ search: search.value, sort: sort.value },
{ preserveState: false }
);
}, 2000);
</script>
We have a:
- Slightly styled input field.
- Single prop of `routeName`, which accepts a Laravel route name such as `'stories.index'`.
- Data property that looks in the Inertia page props for a search value.
- Watcher on that `search` data property.
- Method that uses lodash `debounce` to only fetch results two seconds after the user stops typing (see the sketch below).
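To make that debounce behaviour concrete, here is a rough, standalone sketch. `fetchResults` is just an illustrative stand-in for the Inertia call in the component above, not part of the component itself:
// Rough sketch: debounce collapses a burst of calls into one trailing call.
import { debounce } from 'lodash';

const fetchResults = debounce((term) => {
  console.log('fetching results for:', term); // stand-in for Inertia.get(...)
}, 2000);

fetchResults('c');   // user types 'c'
fetchResults('ca');  // user types 'ca'
fetchResults('cat'); // only this call runs, two seconds after it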
Adding Speech Recognition
Let's add a button to enable the listener for the SpeechRecognition API. Just below the input field, add a button:
<button @click="startVoiceRecognition">
Click to start voice recognition!
</button>
And in the `script` section, we add the `startVoiceRecognition` method we called in the template:
<script setup>
const startVoiceRecognition = () => {
const recognition = new (window.SpeechRecognition ||
window.webkitSpeechRecognition)();
recognition.interimResults = true;
recognition.addEventListener("result", (event) => {
let transcript = Array.from(event.results)
.map((result) => result[0])
.map((result) => result.transcript)
.join("");
if (event.results[0].isFinal) {
search.value = transcript;
}
});
recognition.start();
};
</script>
This method creates a new instance of the `SpeechRecognition` object and sets the `interimResults` property to `true`. When `event.results[0].isFinal` is `true`, it sets the `search` data property to the `transcript` value assembled from the event's results.
This alone should be enough to get the speech recognition working in your search input, but we can do a little more to improve the user experience.
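One caveat worth noting: not every browser ships the SpeechRecognition API, so you may want to guard the handler with a quick feature check. A small sketch, not part of the original component:
// Optional guard: bail out early if the browser has no SpeechRecognition support.
const startVoiceRecognition = () => {
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  if (!SpeechRecognition) {
    console.warn('Speech recognition is not supported in this browser.');
    return;
  }
  const recognition = new SpeechRecognition();
  // ...same listeners and recognition.start() as above
};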
Add start and end event listeners
Since we don't yet have anything telling the user that the input is listening, we can add a `listening` data property to use for toggling styles on the button.
<script setup>
let search = ref(null);
let sort = ref(null);
const searchRef = ref(null);
const listening = ref(false);
</script>
We will add event listeners for the `start` and `end` events of the SpeechRecognition API. Add these right before the `recognition.start()` call:
<script setup>
const startVoiceRecognition = () => {
const recognition = new (window.SpeechRecognition ||
window.webkitSpeechRecognition)();
recognition.interimResults = true;
recognition.addEventListener("result", (event) => {
let transcript = Array.from(event.results)
.map((result) => result[0])
.map((result) => result.transcript)
.join("");
if (event.results[0].isFinal) {
search.value = transcript;
}
});
// keep the voice active state in sync with the recognition state
recognition.addEventListener("start", () => {
listening.value = true;
});
recognition.addEventListener("end", () => {
listening.value = false;
});
recognition.start();
};
</script>
Now we can use the `listening` data property to toggle the input's and button's styles:
<input
id="search"
ref="searchRef"
v-model="search"
class="h-8 w-full cursor-pointer rounded-full border border-blue-700 bg-gray-100 px-4 pb-0 pt-px text-gray-700 outline-none transition focus:border-blue-400"
:class="{ 'border-red-500 border-2': listening }"
autocomplete="off"
name="search"
placeholder="Search"
type="search"
@keyup.esc="search = null"
/>
<button @click="startVoiceRecognition"
:class="{
'text-red-500': listening,
'listening': !listening,
}"
>
Click to start voice recognition!
</button>
The scoped style below maintains the button's styles when it's active:
<style scoped>
.listening:active {
@apply text-red-500;
}
</style>
Please add your own styles as you see fit; I'm using Tailwind CSS classes here. In my application, I also used an SVG inside the button to apply the styles to. This article is just meant to give a basic outline of how to add them.
I have also used InertiaJS in my application, so you may have to adjust the `searchMethod` method to fit your application.
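For example, if you are not using Inertia, the debounced method could hit a plain JSON endpoint instead. This is a hypothetical sketch; the `/search` URL, the `results` ref, and the response shape are assumptions rather than part of the original component:
// Hypothetical non-Inertia variant of searchMethod: debounce a plain fetch.
import { ref } from 'vue';
import { debounce } from 'lodash';

const results = ref([]);

const searchMethod = debounce(async () => {
  // 'search' is the same ref used by the watcher in the component.
  const response = await fetch(`/search?query=${encodeURIComponent(search.value)}`);
  results.value = await response.json();
}, 2000);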
Here is the entire component with the SpeechRecognition API added:
<template>
<div class="w-full px-2 bg-transparent flex">
<label for="search" class="hidden">Search</label>
<input
id="search"
ref="searchRef"
v-model="search"
class="h-8 w-full cursor-pointer rounded-full border border-blue-700 bg-gray-100 px-4 pb-0 pt-px text-gray-700 outline-none transition focus:border-blue-400"
:class="{ 'border-red-500 border-2': listening }"
autocomplete="off"
name="search"
placeholder="Search"
type="search"
@keyup.esc="search = null"
/>
<button @click="startVoiceRecognition"
:class="{
'text-red-500': listening,
'listening': !listening,
}"
>
Click to start voice recognition!
</button>
</div>
</template>
<script setup>
import { ref, watch } from "vue";
import { Inertia } from '@inertiajs/inertia';
import { debounce } from 'lodash';
const props = defineProps({
routeName: {
type: String,
required: true,
},
});
let search = ref();
let listening = ref(false);
let searchRef = ref(null);
watch(search, () => {
if (search.value) {
searchMethod();
} else {
Inertia.get(route(props.routeName));
}
});
const searchMethod = debounce(() => {
Inertia.get(
route(props.routeName),
{ search: search.value },
{ preserveState: true }
);
}, 2000);
const startVoiceRecognition = () => {
const recognition = new (window.SpeechRecognition ||
window.webkitSpeechRecognition)();
recognition.interimResults = true;
recognition.addEventListener("result", (event) => {
let transcript = Array.from(event.results)
.map((result) => result[0])
.map((result) => result.transcript)
.join("");
if (event.results[0].isFinal) {
// Split the transcript into words, remove duplicates, and join back together
transcript = [...new Set(transcript.split(" "))].join(" ");
search.value = transcript;
}
});
// keep the voice active state in sync with the recognition state
recognition.addEventListener("start", () => {
voiceActive.value = true;
});
recognition.addEventListener("end", () => {
voiceActive.value = false;
});
recognition.start();
};
</script>
<style scoped>
.listening:active {
@apply text-red-500;
}
</style>
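To use the component, import it into a parent page and pass the route name prop. The component name and import path below are just examples from my setup; adjust them to match yours:
<!-- Hypothetical parent page; SearchInput.vue is whatever you named this component. -->
<script setup>
import SearchInput from '@/Components/SearchInput.vue';
</script>
<template>
  <SearchInput route-name="stories.index" />
</template>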
Conclusion
At this point you should have a working search input on your page that listens for speech when the button is clicked, then automatically updates the input with the spoken words once speech has been recognized.
I have certainly been enjoying how fast I can code out my ideas using VueJs and Tailwind as a starting point for my applications. I can reuse this component in any of my applications and it will work just fine. I hope you find this useful too.
Happy coding!