
HoloLens!!

I had demoed a HoloLens previously at the //build conference last year, but I hadn’t gotten the opportunity to really tinker with one. You can imagine my excitement, then, when the Development Edition HoloLens arrived at the office.

 

So what is a HoloLens?

 

It’s a mixed reality headset combined with the power of a computer: no wires, nothing to plug in. Just put it on and go; it’s even got its own speakers. HoloLens runs a version of Windows 10, just like any laptop or desktop. I opened the weather app (the same one as on my laptop) and pinned it to the wall behind my monitor. Then I opened a web browser and loaded my Twitter feed over by the door. Another app that came pre-installed allowed me to place holograms around the room. Naturally, I put a cat on the floor. The HoloLens is able to map out my environment so that holograms stay where they are placed; I was able to walk around the cat, viewing it from all angles.

 

In short, HoloLens is pretty darn cool.

 
 

Interaction

 

There are a few methods for interacting with HoloLens.

Gaze: In a lot of ways, Gaze is just like a mouse cursor. A small circle appears in front of you, centered in your vision; just turn your head to move it.

Voice: With Cortana integration, Voice Commands are also available.

Finger Tap:  Hold out your index finger, and make a tapping gesture without bending your knuckles. This is very similar to a mouse click.

Bloom:  Hold your hand out in front of you with your fingers together, pointing up. Then simply open your hand.  This gesture is used by the OS to show and hide the Start menu.

 

 

HelloLens

 

Time to write my first (albeit simple) app! There are two types of HoloLens apps: 2D apps and immersive apps. 2D apps appear in a window that can be attached to any flat surface in the environment. These are just regular UWP apps built in Visual Studio; there are some new APIs available, but otherwise not much is different. Immersive apps, which are what I’m interested in, are built using Unity and are not constrained to a window. Luckily, a personal license to Unity is free, which is enough for me to get started building a HoloLens app.

The HoloLens Academy developer site has several in-depth tutorials; I chose to start with the first one, Holograms 100.  Following the tutorial, I created a new project in Unity, set up the camera, and configured the project to target a holographic view rather than a 2D display. Next, I created a white cube placed 0.25 meters in front of the camera.  My test project was now ready to build. Deploying to the device was as simple as going into the Settings on the HoloLens to enable Developer Mode and pairing the HoloLens with my laptop.

Tada! I had a white cube floating in front of me.

 

 

Next Steps

 

One of the recommended HoloLens tools in the documentation is an SDK called Vuforia:

“Vuforia enables you to create holographic apps that can recognize specific things in the environment and attach experiences to them.”

There is potential to build some really cool things with that. 🤔

Ding Dong! The IoT doorbell

 

Internetting the Things

 

I was looking for a reason to experiment with the Internet of Things, and we happened to have a few Raspberry Pi 2s around the office… and our office has a nice reception / waiting area, but no doorbell…

…so why not build a connected doorbell using Windows 10 IoT running on a Raspberry Pi?

Installing Windows on a Raspberry Pi ended up being pretty straightforward. After installing the Windows 10 IoT Core Dashboard, a simple setup wizard allowed me to download the IoT Core image and flash it to the Raspberry Pi’s MicroSD card. Next, I connected the Pi to the office’s Ethernet, then connected to its admin page remotely through my web browser.

A neat thing I learned about Windows 10 IoT is that my Pi could run headless (no display), but I could still connect a display if I wanted to.  The Raspberry Pi could be configured to boot automatically into my Doorbell app each time it was turned on.

 
 

Building a button

 

My doorbell wouldn’t be very useful if it couldn’t be pressed, though. I ordered a breadboard kit online, which included a variety of bits and pieces. I found a Push Button sample project on the Windows 10 IoT site, which would be the starting point for my doorbell codebase.

Breadboard and bits

… and just because I thought it would be cool, I also connected an LED to the breadboard, which would blink when the button was pressed. ¯\_(ツ)_/¯

After wiring the button and LED to the GPIO pins on the Raspberry Pi, the pins also have to be configured in code. For my doorbell, the button is connected to pin 5, and the LED is connected to pin 6.

		private const int LED_PIN = 6;
		private const int BUTTON_PIN = 5;
		private GpioPin ledPin;
		private GpioPin buttonPin;

		private void InitGPIO()
		{
			var gpio = GpioController.GetDefault();

			// Show an error if there is no GPIO controller
			if (gpio == null)
			{
				return;
			}

			buttonPin = gpio.OpenPin(BUTTON_PIN);
			ledPin = gpio.OpenPin(LED_PIN);

			// Initialize the LED to the OFF state by first writing a HIGH value.
			// We write HIGH because the LED is wired in an active-LOW configuration.
			ledPin.Write(GpioPinValue.High);
			ledPin.SetDriveMode(GpioPinDriveMode.Output);

			// Check if input pull-up resistors are supported
			if (buttonPin.IsDriveModeSupported(GpioPinDriveMode.InputPullUp))
				buttonPin.SetDriveMode(GpioPinDriveMode.InputPullUp);
			else
				buttonPin.SetDriveMode(GpioPinDriveMode.Input);

			// Set a debounce timeout to filter out switch bounce noise from a button press
			buttonPin.DebounceTimeout = TimeSpan.FromMilliseconds(50);

			// Register for the ValueChanged event so our buttonPin_ValueChanged
			// function is called when the button is pressed
			buttonPin.ValueChanged += buttonPin_ValueChanged;

			ringButton.Visibility = Visibility.Collapsed;
		}

		// Sketch of the handler: with the pull-up, the pin reads LOW while the
		// button is held down, so a press arrives as a falling edge.
		private void buttonPin_ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs e)
		{
			if (e.Edge == GpioPinEdge.FallingEdge)
			{
				ledPin.Write(GpioPinValue.Low);   // LED on (active LOW)
				// ... post the Slack notification here ...
			}
			else
			{
				ledPin.Write(GpioPinValue.High);  // LED off when released
			}
		}

 
 

Slack integration

 

Our team uses Slack, an awesome communication tool that integrates with a large number of third-party services and features.  To complete the doorbell, I would create my own custom Incoming Webhook, which my doorbell would send data to.

Incoming Webhooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload, which includes the message and a few other optional details.
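As a sketch, the payload for an Incoming Webhook can be as simple as a single JSON object (the channel and username values here are illustrative; everything beyond "text" is an optional override):

```json
{
    "text": "Ding Dong <!here>! Somebody is at the Front Door.",
    "channel": "#visitors",
    "username": "doorbell"
}
```

POSTing that JSON to the webhook URL Slack generates for the integration is all it takes.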

I configured my integration to post to a specific #visitors channel. The Slack notification would use the @here keyword so that only those team members who were currently online would get notified. Besides, odds are that if you aren’t online, you probably aren’t in a position to answer the door anyway.

			// Post the Slack message. HttpClient and HttpStringContent here are
			// the UWP types from Windows.Web.Http.
			HttpClient client = new HttpClient();
			String uriString = "<insert slack webhook here>";
			var cameraUrl = "<insert URL to camera>";
			var json = "{\"text\": \"Ding Dong <!here>! Somebody is at the Front Door. <" + cameraUrl + "|Click here to see who it is.>\"}";
			var body = new HttpStringContent(json);

			var response = await client.PostAsync(new Uri(uriString), body);

 

Early version of doorbell in action

 
 

A fancier button

 

My doorbell was functional, but not very user friendly in this state. Luckily, the office also has a 3D printer.  One of the designers helped me with the modeling, and we 3D printed a box and button cover that would attach to the push button connected to my Raspberry Pi.

Next, we connected the Raspberry Pi to a monitor and flipped it around to face the door.  The doorbell was now ready for action!

 

Doorbell deployed with new 3D printed button

 

The Source Code for my Slack Doorbell is available on GitHub.

I, for one, welcome our new bot overlords

Here come the bots!


One of the more interesting announcements from the recent Microsoft //build conference was the introduction of the Bot Framework. Demoed during the keynote was a Domino’s Pizza Bot that allowed the speaker to order a pizza from the stage using natural language. It seemed pretty neat, so I decided then that I wanted to create a bot myself. I thought I would create a bot that would hook up to that game I can’t seem to stop playing. Luckily, Destiny even has a public API that makes it possible to pull various stats and activities from the game, with some minor hoops to jump through.

 

Getting Started

 

I decided I would build my bot using Visual Studio 2015, so the first step was to download and install the new Bot Application template into my Visual Studio templates folder. After that, I could create a new project and have a fully functional Echo Bot starting point: a user sends a message to the bot, and the bot simply echoes it back. There are a handful of system messages the bot can receive as well, such as “BotAddedToConversation” or “UserAddedToConversation”. Since my bot is just going to be used to return information from the Destiny API, these are going to go unhandled.

After verifying the bot was working with the Bot Framework Emulator, I deployed it to an Azure backend and registered the bot on the dev.botframework.com portal. A full guide on getting a bot started in .NET can be found on the Bot Framework website.

 

Understanding Language

 

One of the most interesting parts of the Domino’s Pizza Bot demo was the natural language used to communicate with the bot. The pizza bot is able to parse various properties out of a phrase such as “send me a large pizza with pepperoni”, and know that the size is “large” and the topping is “pepperoni”. Sadly, the technology demonstrated in the keynote is not currently available, but I did come across a similar service in LUIS (Language Understanding Intelligent Service). Although not as fancy an interface, LUIS would allow me to register “intents” and detect “entities” within messages sent to my bot.

After registering an account on luis.ai and creating a new application, I started by making some Entities to hold detected values.

  • Statistic: This denotes which piece of information to poll from a player’s Destiny stats (such as weapon-specific kills, or activities completed).
  • Activity: Player stats on Destiny activities are separated into PvE (Player versus Environment) and PvP (Player versus Player); this entity is used to refine the dataset returned when requesting player stats.
  • Vendor: I had planned on allowing my bot to poll the specific inventories of any of the vendors in the Tower.
  • PlayerPair: I wanted my bot to be able to compare the stats of different players, so I created this Entity with two Children: “Player1” and “Player2”.
  • PlayerEntity: I am using this entity to capture the console the player is on and the associated PSN ID or Gamertag. I later realized that I could do a player search without knowing the console.

 

With the Entities created, I could start creating Intents. At a basic level, an Intent is the output of sending input to LUIS: a message is evaluated, and the Intent that best matches is returned to my bot, along with any recognized Entities.

I started by creating a new Intent named “Stats” that I would use to return Destiny stats about a specific player, denoted by a recognized Gamertag. It has two required parameters, “Statistic” and “Gamertag”, and one optional parameter, “Activity”. When parameters are required, the Intent will not be returned as a valid result unless all the required parameters are recognized as well.

 

Adding a new Intent named Stats

 

After creating an Intent, LUIS must be trained to recognize it. An utterance is a sample input that might be provided to the service. After entering an utterance and picking the appropriate Intent, you can select text to denote which Entities the utterance contains. By adding a variety of utterances and appropriately tagging them, LUIS will start to gain more confidence in its recognition.

 

Adding a new utterance

 

In the end, I registered 12 Intents and entered over 100 different utterances. Now that my bot was registered on the Bot Framework site, hosted on Azure, and registered with LUIS, I was finally ready to start coding.
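Once trained, LUIS returns its evaluation as JSON. As a rough sketch of the shape of that response (the utterance, entity values, and scores below are made up for illustration), a query against my “Stats” Intent might come back looking something like:

```json
{
    "query": "what are ExampleGamertag's sniper kills",
    "intents": [
        { "intent": "Stats", "score": 0.92 }
    ],
    "entities": [
        { "entity": "sniper kills", "type": "Statistic", "score": 0.87 },
        { "entity": "examplegamertag", "type": "Gamertag", "score": 0.81 }
    ]
}
```

The botbuilder SDK parses this for me, which is what makes the LuisDialog approach in the next section so convenient.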

 

Building the Destiny Ghost Bot

 

Hooking my LUIS application up to my bot’s code was surprisingly straightforward, thanks to the botbuilder SDK. Where previously my bot would receive a message and echo back a response, now my bot receives a message and returns a LuisDialog object.

	[BotAuthentication]
	public class MessagesController : ApiController
	{
		/// <summary>
		/// POST: api/Messages
		/// Receive a message from a user and reply to it
		/// </summary>
		public async Task<Message> Post([FromBody]Message message)
		{
			if (message.Type == "Message")
			{
				// return our reply to the user
				return await Conversation.SendAsync(message, () => new DestinyDialog());
			}
			else
			{
				return HandleSystemMessage(message);
			}
		}

		// HandleSystemMessage deals with system messages such as
		// "BotAddedToConversation"; it is omitted here for brevity.
	}

The important thing to note is that the DestinyDialog class has a LuisModel attribute containing both the LUIS App ID and a Subscription Key from Azure. Next, several methods are defined, one for each Intent, with a LuisIntent attribute matching the Intent’s name as created on LUIS.

	[LuisModel("APP_ID", "SUBSCRIPTION_KEY")]
	[Serializable]
	public class DestinyDialog : LuisDialog<object>
	{
		// Get Destiny API key from https://www.bungie.net/en/User/API
		private readonly string apiKey = "API_KEY";

		public DestinyDialog(ILuisService service = null)
			: base(service)
		{
		}

		[LuisIntent("")]
		public async Task None(IDialogContext context, LuisResult result)
		{
			context.Wait(MessageReceived);
		}

		[LuisIntent("Stats")]
		public async Task GetStatForGamertag(IDialogContext context, LuisResult result)
		{
			EntityRecommendation statistic;
			if (!result.TryFindEntity("Statistic", out statistic))
			{
				// Statistic is a required Entity, so this should never happen
			}

			// ... look up statistic.Entity via the Destiny API and post the reply ...
		}
	}

Retrieving the recognized Entities is as simple as calling TryFindEntity on the passed-in LuisResult. I then take the parameters, call into the Destiny API, and send a response back to the user. Suddenly, my bot became a whole lot more interesting.

 

Now we're talkin'

 

Conclusion

 

The Microsoft Bot Framework allowed me to get a bot up and running very quickly, and I was able to easily add natural language understanding. Not only that, but my bot is accessible across several different channels such as Skype, Slack, GroupMe, and more. The bots are coming, and I, for one, welcome our new bot overlords.
 
Try interacting with the bot yourself through the embedded web interface (one of the Bot Framework channels) below.
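The embed itself is just an iframe pointing at the Web Chat channel. A sketch of what that looks like (the bot handle and secret below are placeholders for the values the Web Chat channel configuration provides):

```html
<iframe src="https://webchat.botframework.com/embed/YOUR_BOT_HANDLE?s=YOUR_SECRET"
        width="400" height="500"></iframe>
```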
 

The Source Code for my Destiny Ghost Bot is available on GitHub.