It is pretty amazing to see this new wave of devices that can handle voice and let you perform actions just by saying the activation word followed by your command. No doubt, it is impressive that we can do that now.
A lot of companies are fighting to get devices out to customers, and just as many are fighting to get their "voice apps" onto those devices.
My biggest question right now is: if this is such a good idea, why can't we voice control the websites of the companies that already have an Alexa skill? What is stopping them from making their websites voice enabled?
Why can't I say, "play the latest album by Eminem" in the Spotify app? I can do that right now with my Google Home Mini, and I could if I had an Alexa.
Why can't I say, "watch Back to the Future" on Netflix.com? Again, I can do that on the Apple TV, or on a Google Home Mini with a Chromecast.
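The building blocks for this already exist in the browser: Chrome ships the Web Speech API's `SpeechRecognition` interface (prefixed as `webkitSpeechRecognition`). A minimal sketch of what a voice-enabled site could do, assuming illustrative command patterns and action names of my own invention (nothing here is Spotify's or Netflix's actual API):

```javascript
// Sketch: voice commands on a plain web page via the Web Speech API.
// Command patterns and action strings are illustrative, not a real site's API.

// Map spoken phrases to actions with simple patterns.
const commands = [
  { pattern: /^play the latest album (?:of|by) (.+)$/i, action: (m) => `play-latest-album:${m[1]}` },
  { pattern: /^watch (.+)$/i, action: (m) => `watch:${m[1]}` },
];

// Match a transcript against the known commands; null means "not understood".
function handleCommand(transcript) {
  const text = transcript.trim();
  for (const { pattern, action } of commands) {
    const match = text.match(pattern);
    if (match) return action(match);
  }
  return null; // a real page could show suggestions on screen here
}

// In a browser, wire the matcher to speech recognition. Guarded so the
// matcher above can also be exercised outside a browser.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;
  recognition.onresult = (event) => {
    const result = event.results[event.results.length - 1];
    console.log(handleCommand(result[0].transcript));
  };
  recognition.start();
}
```

The nice part, compared to a standalone speaker, is that the page itself can render the recognized text and the matched action immediately, so the user sees what went wrong when a command is not understood.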
Why do these companies go through the trouble of making a skill, submitting it through the right channels, testing that it works, and investing in an ecosystem that is not their own?
Of course, they should make the skill available for all the voice assistants, but why not build it first in an environment you control yourself? Each app would also stay in control of its users' privacy.
We don't need another device to talk to our computer; we already have a computer that can do all of that and more, so why not use it first? And why exclude everyone who hasn't bought one of these devices, or maybe can't afford one?
It would even be better to do it from a computer than from a voice assistant: you would have a screen giving you immediate feedback, you would have alternative inputs (keyboard, touchpad, touchscreen), and it would be a better way to learn how to interact with a voice assistant than today, where you are guessing what to do or getting it wrong.
I was pretty excited when Siri came to the iPhone; it seemed like a fantastic glimpse of what we would be able to do with our phones, but luckily we didn't hold our breath waiting for it to become useful. Siri has never been that useful, and I guess many have done nothing with voice because they thought, "Apple has nearly solved it with Siri; it will be awesome soon, so it makes no sense to do it ourselves."
If voice is interesting, do it on your own platform first, see what works on a platform with no limitations like the web, and then make "skills" and actions for the other platforms.