Jonathan Streater

Software | Data | UX


SoundSketch

March 31, 2016 by Jonathan Streater

SoundSketch is a new kind of musical instrument and playground.  You use pen and paper to draw instruments and then play them by touching them with your fingers or by arranging them in space.  

It connects the physical world of ink, paper, and fingers to the digital world of electronically synthesized and controlled sound.  And it connects the expressiveness of the visual world to the expressiveness of the aural world.  Want to draw a drum set and play it?  Want to build a monster's face and listen to it?  Want to draw anything you can imagine and see what happens?  Go ahead.  Grab a marker, draw something, and use your fingers to tap, fiddle, strum, or whatever.

The PR video for the third version of SoundSketch, for which I was Concept Lead.

SoundSketch started in 2012, and I've worked on two newer versions since.  I made the first version in school when I found a bit of random yarn and a discarded tripod in the corner of my research lab and used them to hang my Kinect above a table.

Though there are some differences across the SoundSketch versions, especially in the amount of money spent on each, the interaction design, concept, and algorithms are essentially the same across all three.  I developed version 2 with one other engineer during my first month at Wieden+Kennedy.  For version 3 I took the role of Concept Lead and worked with a team of five: two engineers, two designers, and a woodworker.

Me explaining my very first version of SoundSketch, which I created for a User Interface Prototyping class at Georgia Tech.  This video gets into more of the technical and interaction details and shows the system being used live.

The idea really just came from me wanting to use my Kinect for a novel user interface project.  I love music, so I couldn't think of anything I'd rather make an interface for.  My original and favorite idea was, instead of using pen and paper, to use all of the objects in your environment as the musical interface.  Perhaps you could use the cushions of your couch as drums, the coasters on the coffee table as piano keys, and so on.  Unfortunately, that's a really hard computer vision problem and, at the time, it was out of my depth.

So, I arrived at drawing with black markers on white paper.  

Demoing version 2 in Wieden+Kennedy's atrium.

Things people drew while demoing version 2 in Wieden+Kennedy's atrium.

Me with version 2 at an art tech show in Portland.

People interacting with version 2 at an art tech show in Portland.

A behind-the-technology view of people interacting with version 2 at an art tech show in Portland.

Demoing version 2, along with new sound design, at the beginning of work on version 3.

Architectural diagram of the new setup for version 3.

Graphic design for version 3.

Construction of the setup for version 3.

Final graphic design for version 3.

Calibrating the Kinect for version 3.

Construction of the hardware setup for version 3.

Our impromptu display of people's drawings made with version 3.

People playing version 3.

As much as possible, the system is designed to be simple and fun to jump into while still giving users a sandbox to play and explore, both visually and aurally.  So, for example, a line drawn on a piece of paper becomes, as you might expect, a string on a guitar.  And a rectangle becomes a piano key.

These provide good building blocks.  But at the same time, there is some open-ended randomness associated with shapes of certain sizes and forms, so that people are surprised by the sounds that come out of their creations and encouraged to draw all sorts of weird things.
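
To make that mapping concrete, here is a minimal sketch of the kind of contour-based shape-to-sound classification described above, written with OpenCV's Python bindings (the actual versions were built in Processing/Java and C++/Cinder).  The function name, thresholds, and the "surprise" categories are illustrative guesses, not SoundSketch's real code.

```python
# Hypothetical sketch (not SoundSketch's real code) of mapping drawn blobs to
# instrument types with OpenCV 4.x's Python bindings.
import random
import cv2

def classify_drawing(binary_image):
    """Long thin contours become 'string', solid boxy contours become 'key',
    and everything else gets a randomly chosen 'surprise' sound so that odd
    doodles stay interesting."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    instruments = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:                              # ignore specks of ink
            continue
        aspect = max(w, h) / max(1, min(w, h))       # elongation of the blob
        extent = cv2.contourArea(c) / float(w * h)   # how filled its box is
        if aspect > 6:
            kind = "string"                          # e.g. a guitar string
        elif extent > 0.8:
            kind = "key"                             # e.g. a piano key
        else:
            kind = random.choice(["drum", "bell", "noise"])
        instruments.append({"kind": kind, "bounds": (x, y, w, h)})
    return instruments
```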

After watching people interact and play with SoundSketch for over three years now, I can say that these two major goals have been borne out.  As soon as people begin tinkering, they are delighted to see pen and paper do things pen and paper don't normally do.  But they also instantly grasp the basic rules for drawing their own musical instrument.  At the same time, they often draw and strum their way into pretty funky territory.  They explore.

If you'd like to talk in depth about the design or engineering of any part of SoundSketch, just let me know!  Briefly, the first version ran on my 2012 MacBook Air and used Processing and Java; the second version ran on a 2014 MacBook Pro and used C++ and the Cinder creative coding framework; and the third version ran on two 2015 Mac Pros and used Clojure, C++, Processing, Cinder, and Ableton Live.  All versions used OpenCV.

In my spare time, I'm working on a related but new concept that is slightly more mobile and social!

 

 


needybot

March 30, 2016 by Jonathan Streater

needybot is a robot that needs help to do anything.  To get to where it's going; to meet new people; to figure out what's funny; to survive.  And in this sense, it's the most human robot ever.

The Idea

It's an idea that came to me when I was wandering around Wieden+Kennedy, daydreaming about how I could get a mobile robot to ride the elevator up and down by itself.

Dan Wieden spending some quality time with needy.

Unfortunately, that's a really hard engineering problem.  The robot needs to:

  1. navigate to the elevator
  2. push the elevator button
  3. get into the elevator before it closes while avoiding people
  4. push another elevator button
  5. detect when it's on the right floor
  6. get out of the elevator again

And that's just in the most general, high-level terms.  All of that requires a whole lot of coordination between the robot's sensors, wheels, and button-pushing arm.  Really, really complicated stuff.

But what if the robot didn't have to solve all of those problems itself?  What if it could lean on humans instead?  It could hang out next to the elevator and, when a human passes, just call out for help with pushing the button and getting into the elevator.  Suddenly, my job as an engineer is ridiculously simpler, and these complicated engineering problems are transformed into less complicated social problems.
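
Just to illustrate the shift, here is a toy, entirely hypothetical sketch of what "leaning on humans" looks like as code; none of these names or prompts come from needybot itself.

```python
# Toy, hypothetical illustration of the "lean on humans" idea: instead of
# planning the whole elevator ride autonomously, wait by the elevator and hand
# each hard sub-task to a passerby.
import time

HARD_SUBTASKS = [
    "push the call button for me",
    "roll me into the elevator",
    "press the button for the 4th floor",
    "roll me out when we get there",
]

def person_nearby():
    """Stand-in for a real detector (camera, depth sensor, heat sensor...)."""
    return True

def ask_for_help(request):
    print(f"needy: excuse me, could you {request}?")
    return input("did they help? (y/n) ").strip().lower() == "y"

def ride_elevator():
    for subtask in HARD_SUBTASKS:
        # keep asking until some kind human completes this sub-task
        while not (person_nearby() and ask_for_help(subtask)):
            time.sleep(5)
    print("needy: thank you! made it to the right floor.")
```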

An early version of needy on his charger, wearing his 3D-printed skeleton.

A version of needybot asking you if a task was successful.

An early hacked-together needy sitting on a drop zone, a place where people can put needy so he knows exactly where he is.

needy with his fur and handle/headphones.

Naked needy posing for cameras.

Less naked needy being posed for cameras.

needy on a green screen.

More needy posing.

The view of needy from above, though without his headphones.

needy following the intern around...

needy's skeleton next to needy.

The day we first got the robot, the iPad with animations, and needy's voice all working together.  And then we dressed him in a hoodie...

Early sad, naked needy.

Filming some of needy's capabilities.

needy in the atrium being shown off.

There's something incredibly fascinating and deep here. This isn't just about solving engineering problems anymore.  We're asking humans to help a robot.  We're asking them to be curious about and empathize with a machine that, in the media, is better than them at everything.  It's taking their jobs, beating them in games, murdering them, and destroying the world.  It's scary.

But ironically, the core of this particular machine is perhaps the most human thing there is: empathy.  A human's ability to do much of anything (to eat, to learn, to have success in life generally) depends largely on their ability to connect with people and to plug into the human network.  And now, with needybot, this is starting to be true for robots too.  Just like anybody, needy's survival depends on his ability to connect with people and on them caring about him.

It's a prescient cultural statement.  And it turns the history of Artificial Intelligence, a field that started in the 1950s, on its head.

As I was kicking these ideas around, I would emphatically tell my co-workers about them: about a robot that needs help, and about some of the "needy scenarios" the robot could find itself in.  And every time, people's eyes lit up.  Every time, people got excited about something very interesting in this idea.  By the time I told my boss about it and he asked me if I was interested in making it a major project for our department, most of my teammates already knew all about this thing I called needybot and wanted to help make it real!

So how do we engineer and design an organism that people care about?  How do we build a machine that actually needs help and is able to ask for it?  And how do we create an experience that at every point effectively communicates some of these grand human ideas about empathy, vulnerability, friendship, and self-actualization?  

Essentially, how do we use robots as a medium?

Concept and Design

This isn't something anyone has done much of, so we started with design sprints to focus the idea into an experience that everyone bought into.  I briefed a handful of designers and developers on my idea, gave an overview of the field of social robotics (mostly via Cynthia Breazeal's work), and introduced some of the tools we would likely use, such as the Robot Operating System and the Turtlebot 2.

After that initial briefing, we had a number of additional design sessions and sprints, each focused on one of the different work streams required to make needy whole.  Topics included figuring out the one sentence that described the intent behind the idea, finding needy's voice as a character, exploring different interaction scenarios and activities for needy, and breaking down needy's different communication strategies.

Concept art that a designer gave me during the project.  It will be on my wall shortly!

The very first pic I took of our very first design sprint!

Our wall of cards containing any questions our team had about needy during our design sprint process.

Some early 3D-printed models of various physical designs for needy.  This was when we were still thinking about 3D printing one whole exterior instead of using the skeleton approach.

The final needy skeleton after being painted.

An early attempt at what a website that shares needy's status would look like.

A designer had to painstakingly put together the skeleton like a Lego set.

Early models.

Explorations of needy's face and animation.

3D model.

A roughly to-scale paper prototype of what needy might look like.

We broke into teams, each with at least one designer and one engineer, and sketched out a more involved storyboard of a day in the life of needy.

My dog Brutus (the neediest of all bots) in front of a whole lot of needybot concept explorations.

Wireframes for various features that we were exploring.  This is me and some of the engineers using post-it notes to start breaking down how to engineer things.

Another group exercise we did for developing needybot's character and voice.

A breakdown of different questions we had as we worked and how we might answer them.

A list of assumptions, including Asimov's laws of robotics.

Concepting design sprint exercises exploring some of needy's possible features.  We wanted to explore all possible territory.

Just a breakdown that I particularly like that connects the idea to the possible executions.

More needy questions.

Some of our attempts to whittle needybot down to a sentence.

Covering the Turtlebot in paper as he wanders.

More needy concepts!

Through all of this I came to realize that a great lens for looking at needybot is as a game.  He's not in a virtual world, and it doesn't involve shooting anything or collecting coins.  But thinking about how you make an interactive robot is very much like figuring out how you make a game.  Many of the conceptual and technical tools apply, and are extremely helpful, for both.

For example, what are the interaction mechanics for needy?  What can needybot actually do, and how can he communicate?  How do we connect the engineering of a robot to interesting design that people actually understand and are able to interact with?  I especially love the idea of an interaction mechanic.  It ties together the empathy required for thoughtful, effective design with the engineering required for complex functionality.

We arrived at several core interaction mechanics for needy:

  1. touching needy's face (a tablet)
  2. needy doing facial recognition
  3. needy following people around
  4. needy being able to say prerecorded things
  5. and needy navigating around the building.  

With these core engineering problems solvable and nailed down, we began to see how to design a living organism.  One that is intelligible to people.  And one that people might be able to care about and take action for.

I was central in this part of the interaction design for needy: figuring out how to connect robotics technologies to interaction mechanics that people can actually understand and use.  Because we were always racing to code and build as we designed, my goal was to put needy in front of people as much as possible.  I did this informally and often, observing and interviewing with the goal of finding pain points and looking for opportunities to take back to the next engineering iteration.  But I also organized and conducted a slightly more formal user study, built around a specific helping task, that I ran about 20 people through.  I tracked a few quantitative variables, such as time to complete the task and counts of specific kinds of problems.  But mostly I focused on observing interactions, asking people to narrate their thoughts as they interacted, and prying into what they thought about it all afterwards in a semi-structured interview.

Another key part of needy that I was less directly involved with was his physical design.  A team of UX designers, motion designers, and engineers worked to design and fabricate needy's physical build to be both functional and adorable.  Needy's interior was custom designed to hold all of his innards safely in place, his eyeball's look and motion were designed to easily convey needy's emotions and current thoughts, and his skeleton was designed and 3D printed to hold his fuzzy, adorable exterior in place.

Engineering

For the majority of the project, I worked with three other engineers to build needybot.  One focused on the tablet that is needy's face, which involved handling communications from needy's robotics control software as well as face recognition and face animations.  The other three of us worked on needy's core software and hardware.

This mostly involved using Python and the Robot Operating System (ROS) to manage needy's perception, controls, and interactions.  We used the Turtlebot 2 as needy's platform, so at his core he is a small mobile base that can turn in place and has infrared floor and motion sensors as well as a 3D depth sensor.  We also ended up attaching a heat sensor driven by a Raspberry Pi when we realized that needy's follow behavior wasn't quite as good as we wanted.  Beyond that, needy's core software is a series of distributed, event-based nodes that process information (such as incoming sensor data, messages from the iPad, or needy's current state) and pass the results along to whatever might need them, whether that's another node or some system external to needy.
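
For a flavor of what one of those event-based nodes can look like, here is a minimal rospy sketch of a follow-style node that subscribes to a person estimate and publishes velocity commands.  The topic names, message choice, and gains are assumptions for illustration, not needy's actual configuration.

```python
# Minimal rospy sketch of one event-based node: follow the person estimate.
# Topic names, message types, and gains are assumptions, not needy's actual
# configuration. Assumes the estimate is in the robot frame (x forward, y left).
import rospy
from geometry_msgs.msg import PointStamped, Twist

class FollowNode(object):
    def __init__(self):
        rospy.init_node("follow_person")
        self.cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("person_estimate", PointStamped, self.on_person)

    def on_person(self, msg):
        """React to each new person estimate: turn toward them and creep
        forward until roughly 0.8 m away."""
        cmd = Twist()
        cmd.angular.z = 1.5 * msg.point.y            # steer toward the person
        if msg.point.x > 0.8:
            cmd.linear.x = min(0.3, 0.4 * msg.point.x)
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    FollowNode()
    rospy.spin()
```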

A visualization of needy's view via his depth camera.  The red spheres are his attempts to identify legs, and the bigger green sphere is his estimate of where a person should be.

This is me using ROS to map a corner of our building so needy could navigate to different areas by himself.

This is me working with the Turtlebot just after unpacking and setting it up.  The tiny laptop would be replaced twice later.

A messy build of needy that shows much of the innards that hold things in place, including his computer, heat sensor, iPad mount, speakers, and mic.

This is me playing some of needy's recordings.

Bolting together needy's interior.  The headphones are meant to be a handle so people can help him more easily.

Needy's fuzzy exterior close to completion.

My notes breaking down the state machine that defines all of needy's states and how they flow from one to another.

The notes I started making when first developing needy's human-following behavior.

Random engineering notes from the team.

The behavior tree we were using for a while to define needy's states.  We moved away from this approach pretty quickly.

A high-level breakdown of needy's basic behavior control.  It's basically a queue of activities and needs.

The state machines that define needy's behaviors.  Complicated...

A super early architecture for needy and how he connects to the website and his home.

An engineer rewiring needy's power plug.

Brutus with the Turtlebot.

Engineers labeling designs with requirements.

As always, if you'd like to discuss any of the engineering or design details in depth, please don't hesitate to ask.  Needybot should be released by April 2016.  By then there will be much more available in the way of polished videos and information about needybot and what people are doing with him as he lives his life alongside the inhabitants of Wieden+Kennedy!


Verizon's Boarding Pass

March 28, 2016 by Jonathan Streater

Boarding Pass is a novel retail experience that helps customers shop for phones and data plans. It makes shopping for invisible things like data a tangible and fun experience by giving the customer something to hold and walk around with.  

As customers wander around the store, cameras track the card they are interested in and phones react visually according to how close they are and which direction they are approaching from.  And when they get close enough to a particular phone, it lights up with pertinent information. I built the computer vision side of this project using C++, Cinder, and OpenCV.
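
For a sense of the "OpenCV and trigonometry" involved, here is a rough sketch of IR-marker tracking in OpenCV's Python bindings (the production build was C++ with Cinder).  The thresholds and the phone-position table are invented for illustration.

```python
# Rough, illustrative sketch of tracking bright IR card markers and relating
# them to known phone positions. Assumes an 8-bit single-channel IR frame;
# thresholds and the phone coordinate table are made up.
import math
import cv2

PHONE_POSITIONS = {"phone_a": (120, 80), "phone_b": (430, 95)}  # IR-image pixels

def track_cards(ir_frame):
    """Return distance and bearing from each bright marker to each phone."""
    _, bright = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    readings = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # marker centroid
        for phone, (px, py) in PHONE_POSITIONS.items():
            readings.append({
                "phone": phone,
                "distance": math.hypot(px - cx, py - cy),
                "approach_angle": math.degrees(math.atan2(py - cy, px - cx)),
            })
    return readings
```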

This gives you a view of me developing the computer vision for Boarding Pass.  I put wax paper over the infrared sensor on a Kinect 2 to make it easier for my CV algorithm to differentiate between the IR markers on the cards and everything else in visually noisy environments.

A very rough breakdown of my algorithm for a colleague!  It's just some OpenCV and trigonometry.

Coca-Cola's ListCheckr

March 28, 2016 by Jonathan Streater

ListCheckr is Santa's proprietary data analysis software that he uses to help determine whether you've been naughty or nice this year.  It logs into your Facebook account, analyzes your behavior from the last year, and generates an animated data story for you, explaining your judgement. The focus of the experience is to tell an interesting story about you that is based on real data and that is unique to you.

Naughty/nice judgements are decided by calculating metrics of your behavior that we created and then comparing you to the rest of the population.  For example, if you use many more caps-lock words than most people, you'll be found naughty.

We created many more naughty/nice metrics for analyzing your behavior than we actually visualize in the animation we generate.  But we factor all of the metrics into the verdict, and then we generate the video from the metrics where you deviate the most from the norm of the population.  So in your story, weirdos are naughty and you get to see why.
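
Here is a simplified sketch of that scoring idea, assuming each metric is a number we can compare against a sampled population; the metric names, weights, and naughty/nice cutoff below are illustrative, not the real calibration.

```python
# Simplified, illustrative scoring sketch: z-score each metric against the
# sampled population, combine into a verdict, and surface the most unusual
# metrics for the animated story. Cutoffs and metric names are invented.
from statistics import mean, stdev

def deviation_scores(person, population):
    """z-score of each of the person's metrics against the population sample."""
    scores = {}
    for metric, value in person.items():
        values = [p[metric] for p in population]
        spread = stdev(values) or 1.0          # avoid dividing by zero
        scores[metric] = (value - mean(values)) / spread
    return scores

def verdict_and_story(person, population, top_n=3):
    scores = deviation_scores(person, population)
    naughtiness = sum(abs(z) for z in scores.values())   # weirdos are naughty
    verdict = "naughty" if naughtiness > 4.0 else "nice"
    # tell the story with the metrics where this person deviates the most
    story = sorted(scores, key=lambda m: abs(scores[m]), reverse=True)[:top_n]
    return verdict, story
```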

It was available online and at a special vending machine in LA.  

Other than the executive producer, I'm the only person on the project who worked all the way from concept, through design research and prototyping, to production and QA.  I worked with a writer and UX designers to create naughty/nice metrics by connecting human-understandable and entertaining judgements of your behavior to real data in your Facebook profile; I worked with another engineer to build a Python prototype to test our metrics and collect a sample data set for calibrating our data analysis; and I worked with a Java engineer and QA to create a highly scalable system able to be bombarded by many people in a short period of time.


TurboTax's Archie

March 27, 2016 by Jonathan Streater

Archie is a friendly tax expert who is with you all year long, helping you understand and take care of your financial well-being.  He is a chatbot connected to various services and knowledge bases that can make sure you understand money and tax situations as they happen.  

And that was the brief we started with as a creative team of two developers and two designers: to develop a tool that would allow TurboTax to build relationships with their customers all year long, rather than in one urgent, anxiety-filled sprint per year during tax season.

We designed Archie to be a natural language (NL) interface that is both delightful to interact with as well as useful at the right time and place.  So he can tell jokes but he can also notice if you're on a business trip and help you record your receipt, for example.  

NL interfaces are powerful because everyone already understands natural language.  But they are a challenge to design and develop because even state-of-the-art NL algorithms fall well short of people's language abilities.  To make things worse, when NL interfaces fail, people usually don't understand why or how, because these interfaces don't provide much transparency into how they work.  With this in mind, we focused hard on making an interface that makes sure you understand what you can say at all times and that, when there is a disconnect, can directly address and repair the miscommunication.

Helping keep track of receipts for you.

Aggregating all your interactions from the year into a mostly done tax return.

A very high-level architecture for how our system ties together incoming messages from clients, the tax facts we've encoded, and our dialog trees.

With the help of writers, we developed a dialog system that uses trees of dialog to define everything Archie can possibly say, pattern matching to match incoming language against the context of the current conversation, and a probabilistic parser that breaks down what you say into nouns and verbs so they can be used for matching.  We built most of our service in Python.  However, we used Clojure to define tax rules, and we used Wit.ai (since bought by Facebook) for its probabilistic parser and its handy interface for prototyping new dialogs.
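
Here is a toy sketch of how those pieces can fit together: a small dialog tree, a stand-in for the probabilistic parser, and a repair fallback that restates what you can say, in the spirit of the design goals above.  The tree contents and the parse() stand-in are invented; the real system used Wit.ai and far richer dialogs.

```python
# Toy sketch of the dialog mechanics: a tree of things Archie can say, pattern
# matching against a parsed utterance in the context of the current node, and a
# repair fallback that restates what you can say. Tree contents and the parse()
# stand-in are invented; the real system used Wit.ai and much richer dialogs.
DIALOG_TREE = {
    "root": {
        "say": "Hi, I'm Archie. I can track a receipt or take a tax question.",
        "branches": {("track", "receipt"): "receipts",
                     ("ask", "question"): "tax_q"},
    },
    "receipts": {"say": "Got it. Snap a photo of the receipt and I'll file it.",
                 "branches": {}},
    "tax_q": {"say": "Sure, what's your tax question?", "branches": {}},
}

def parse(utterance):
    """Stand-in for the probabilistic parser: guess a (verb, noun) pair."""
    words = utterance.lower().split()
    verb = next((w for w in words if w in ("track", "ask", "record")), None)
    noun = next((w for w in words if w in ("receipt", "question", "deduction")), None)
    return verb, noun

def respond(node_id, utterance):
    node = DIALOG_TREE[node_id]
    verb, noun = parse(utterance)
    for pattern, target in node["branches"].items():
        if (verb, noun) == pattern:
            return target, DIALOG_TREE[target]["say"]
    # repair: instead of failing silently, restate what can be said here
    options = " or ".join(f"'{v} ... {n}'" for v, n in node["branches"])
    return node_id, f"Sorry, I didn't catch that. You can say {options or 'anything about your taxes'}."
```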

My role in this project spanned concepting, researching and introducing the field of Natural Language Understanding-based interfaces to my teammates, prototyping, and connecting our design considerations to our natural language capabilities in production code.  I also managed secondary and primary research in the domain of taxes and finances.  This involved a whole lot of digesting information about our target audiences (for this prototype we focused on simple filers and freelancers who run a home office) and organizing and interviewing tax experts.
