Hello from #THATConference! This is my second year attending and I’ve been quite happy. The only thing I miss here is the great people I had the privilege of meeting and seeing in the IBM-and-friends world, such as at the always excellent MWLUG, which wrapped up a little over a week ago; my work has shifted away over the last year. I have a couple of ideas brewing, though, so who knows what the future will bring.
There’s nothing like a full day at a conference to get your developer juices flowing and rekindle the desire to contribute to the blog. I also want to wrap up this series to make room for other, related topics. So here it is! When last we left off, we had covered the basic structure of a serverless function, complete with the hooks and handlers needed for use as an Amazon Alexa Skill.
Dogs are pure creatures who are generally friendly and eager to please. That’s a pretty encouraging thing, especially since many beginners need a little encouragement when they get into software development.
An Expanded Example
The skill retrieves an array of possible encouraging sayings, generally in the vein of dad jokes, but dog themed. This array could come from anywhere: a backing database exposed via a RESTful API, GraphQL, or otherwise. So long as the serverless function can connect to it, it’s viable. This example builds from a JSON response hosted on a static site. It’s not terribly dynamic, but it’s easy to update and it lets the static site show off some of my amazing punny sayings. You can find that site’s repository at github.com/edm00se/dev-dog, and the live site at edm00se.codes/dev-dog.
Ultimately the skill will let us get a random “fact” (following the Alexa fact-skill format) when querying an Alexa device. It will select a random string from the array, then speak it. So let’s begin.
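The random-selection step is tiny; here’s a minimal sketch (the function name is mine for illustration, not the post’s actual code):

```javascript
// Pick one saying at random from the fetched array to hand to the speech response.
function pickRandomFact(facts) {
  const index = Math.floor(Math.random() * facts.length);
  return facts[index];
}
```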
1. Getting The Data
A Note on Fetch
Why am I using Axios?
TL;DR: Node.js has no built-in implementation of the browser’s Fetch API, so we need to use something else. Initially, I tried the node-fetch package for consistency, but it was a little bloated for my needs. I tried the axios package as well, and wound up selecting axios for its popularity, easy-to-use API, and file size. For more, check out my asciicast comparing the sizes of these options.
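As a hedged sketch of the fetch step (the exact URL path and the response shape here are assumptions for illustration, not necessarily the site’s real endpoint), the axios call boils down to something like:

```javascript
// Hypothetical fetch of the sayings array from the static site's JSON endpoint.
// The facts.json path and the { facts: [...] } shape are assumptions.
function getFacts() {
  // required lazily so this module can be loaded even before `npm install axios`
  const axios = require('axios');
  return axios
    .get('https://edm00se.codes/dev-dog/facts.json')
    .then((res) => res.data.facts);
}
```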
*note: please excuse the junk formatting, apparently asciicasts don’t enjoy being shoved into an iframe (in AMP 🙄)
2. Registering Handlers
Registering the intent handlers is quite similar to the basic example outlined in the previous post. The largest difference is that the “facts” are brought in via a promise; once it resolves, a random member of the resulting array is selected and the response is built out. In this skill, I’m also building out a “card”, which is displayed in the companion app or on a visual Echo device.
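A hedged sketch of such a handler, loosely following the ask-sdk v2 fact-skill shape; the intent name, card title, and the stubbed getFacts() are assumptions, not the post’s exact code:

```javascript
// Stub for the promise-returning data fetch, so this sketch stands alone.
const getFacts = async () => ['Who’s a good dev? You are!'];

const GetNewFactHandler = {
  canHandle(handlerInput) {
    const { request } = handlerInput.requestEnvelope;
    return (
      request.type === 'LaunchRequest' ||
      (request.type === 'IntentRequest' &&
        request.intent.name === 'GetNewFactIntent')
    );
  },
  async handle(handlerInput) {
    // wait for the facts array, then select a random member
    const facts = await getFacts();
    const fact = facts[Math.floor(Math.random() * facts.length)];
    return handlerInput.responseBuilder
      .speak(fact)
      // the card shows up in the companion app or on a visual Echo device
      .withSimpleCard('Developer Dog', fact)
      .getResponse();
  },
};
```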
3. Upload and Configure Alexa Skill
The last thing to happen for this to work is a set of build and deploy steps. Specifically:
- building a zip file (with the dependencies included, the function is larger than the AWS Lambda in-browser editor allows for direct editing)
- uploading the zip file to the AWS Lambda console
- configuring the Alexa Skill in Skill Builder for:
- intent(s) to launch
- certifications (regarding privacy, etc.)
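The packaging step above can be sketched as a pair of npm scripts in package.json; the script names, zip name, and file list are hypothetical, and this assumes the zip CLI is available:

```json
{
  "scripts": {
    "prebundle": "npm prune --production",
    "bundle": "zip -r developer-dog.zip index.js node_modules"
  }
}
```

Running `npm run bundle` then produces an archive ready to upload in the Lambda console.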
That configuration step is a whole topic unto itself, so I’ll follow up next time with a video walkthrough of how to wire it all up.
I hope this may have sparked some curiosity in getting into both serverless functions and maybe even Alexa Skills. So please feel free to check out the Developer Dog Alexa Skill. It’s basic, but brings in much of what’s at play in a serverless function and what it takes to create an Alexa Skill.
I’ve begun standardizing a couple of these steps for my personal projects, such as incorporating some nifty npm scripts that package everything into a zip archive, which can then be uploaded more easily to the AWS Lambda interface for updating. Those will have to wait for another day, though. Until next time, let’s build better apps!