Interview: How Contextual User Experiences Are Optimizing Services
In this interview, Rob May, CEO and co-founder of Talla, discusses how his company is using big data and artificial intelligence to help businesses improve the user experience for their customers.
As technology and its applications have expanded, so have consumer demands for streamlined user experiences and workflows. One major way that businesses can provide easy-to-use services is by automating the management of their content, but sometimes how to do so isn’t clear.
Enter Talla, a New England company dedicated to helping businesses make more efficient use of their data by outsourcing its handling to artificial intelligence. In this interview, also available as a podcast, PSFK CEO Piers Fawkes speaks to Rob May, CEO and co-founder of Talla, about how his company uses a smart knowledge base to automate content for businesses, as well as how it is applying machine learning and natural language processing techniques to create a chatbot that helps new workers get up to speed and be more productive.
Piers: You’ve been pioneering the use of data through AI and other technologies to create contextual user experiences. What are some of the broader trends that you see happening in the industry?
Rob May: We work in the AI industry and the bot industry, and the rise of big data has made it possible to give context to things that didn’t have context before. Now you can do a lot more personalization. You can know a lot more about somebody. You can understand information in a situational manner and in real time. I think those are important trends. People are always looking to improve decision making, prediction and automation, which AI can do.
I have a talk that I give sometimes at conferences called the PAC Framework. It stands for Predict, Automate, and Classify. Those are the three most common things that people do with AI deployments. A lot of it is made possible because we have big, unstructured data, which is allowing us to predict things with better context.
What are people trying to get out of this?
I’ll give an example. We have a customer called MongoDB, the database company. Mongo used our product to create a contextual chatbot for its human resources team. When people have a question about benefits, for example, we have access to a lot of their data repositories that explain these processes and procedures, how to do certain things and what the policies are. We know some things about the different employees there—we know their office location, their status, whether they are full‑time or part‑time, etc. We can take that information and then we can pull the right information from their benefits policies to give them an in-context answer. Maybe an engineer in Dublin and an accountant in San Francisco have different benefits based on factors including the company’s legal structure in the countries or offices they’re in. We can factor a lot of that in and give people better answers rather than making them dig through a pile of information to figure out what’s relevant to them.
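As an illustration only, the contextual-answer idea above can be sketched as a lookup that filters policy snippets by employee attributes. The policy store, field names, and policy text here are all hypothetical, not Talla's or MongoDB's actual data or implementation:

```python
# Hypothetical policy snippets keyed by employee attributes.
POLICIES = [
    {"office": "Dublin", "employment": "full-time",
     "text": "Dublin full-time employees receive 25 days of annual leave."},
    {"office": "San Francisco", "employment": "full-time",
     "text": "San Francisco full-time employees receive 20 days of PTO."},
]

def answer_benefits_question(employee: dict) -> str:
    """Return the policy snippet matching the employee's context."""
    for policy in POLICIES:
        if (policy["office"] == employee["office"]
                and policy["employment"] == employee["employment"]):
            return policy["text"]
    return "No policy found; escalating to HR."

engineer = {"office": "Dublin", "employment": "full-time"}
print(answer_benefits_question(engineer))
# prints "Dublin full-time employees receive 25 days of annual leave."
```

A production system would of course retrieve and rank free-text documents rather than exact-match a table, but the principle is the same: user context narrows the answer before anything is shown.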
How do you describe the service you offer, and what are the benefits?
We offer two things, really. We offer a knowledge base, much like Confluence or Google Docs, that you would put information into. The difference is, when you put information into ours, we understand it. We have artificial intelligence in there that can apply context to the data.
We can do inference on the data, which means we can answer questions about it that aren’t explicitly stated in it. Then we have a bot platform that wraps around that. What that allows you to do is create bots that do things based on the information that you put in the knowledge base.
You can automate certain pieces of your work and you can use them for information retrieval—to answer basic questions. People use them for things like onboarding new employees. Anything that’s an information‑related task or a communication‑related task that you want to automate is where we specialize.
Is there any particular approach you have that keeps people’s data secure, and any other thoughts about what’s happening in that space?
One of the things that we’ve done from the beginning is acquire a certification called SOC 2, which attests that we have certain procedures and processes in place that do a lot to protect the data. We follow all the best practices around security. The challenge for artificial intelligence when it comes to context, data and privacy is that you need that data to train your neural network models. What’s interesting is you can’t get the data back out of the model. If I take a big pile of data, train a model on it, and then get rid of the data, the model is dependent on the data, but I can’t get the data back out of the model.
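The point that a trained model retains only learned parameters, not the training data itself, can be shown with a toy example. This is a simple least-squares line fit for illustration, not Talla's models or anything resembling a production neural network:

```python
def train(xs, ys):
    """Fit y = a*x + b by least squares; return only the parameters."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

data_x, data_y = [1, 2, 3, 4], [2, 4, 6, 8]
model = train(data_x, data_y)   # the "model" is just two numbers: (a, b)
del data_x, data_y              # the training data is gone
a, b = model
print(a * 5 + b)                # the model still predicts: 10.0
```

The two fitted numbers depend on the data, but the individual training points cannot be reconstructed from them, which is exactly the conundrum Rob describes: deleting a customer's raw data does not undo its influence on the model.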
One of the questions that we had to navigate early on is when a person stops being a customer or wants us to not have access to their data anymore. Do we need to delete their data from the data set, and then retrain our model? Or, can we just delete their data so that nobody has access to it, but we can continue to use this model that was partially trained on their data? It’s a really interesting conundrum for some companies. A lot of it depends on what your model does and what you’re trying to do with it. In general, companies have been pretty OK about letting us keep derivatives of the data.
There aren’t a lot of industry standards on it at the moment. It’s a problem if you’re going to train models on certain data, because you have to keep a record of how you trained them and what data you used. Then, the model also has to be interpretable.
Where’s the next big opportunity when it comes to the use of data to create contextual services?
One of the theories behind Talla is that if you think about the work that a worker does, you can divide it into three parts. At the bottom part, you have work that you could script away. Much of that has been scripted away by software. At the top you have complex, cognitive, strategic, creative work, which, depending on your view of artificial intelligence, machines may never do. Or even if you think machines will do them, it’s decades away. Then in between, you have a bunch of tasks that we think a machine could do if we had a data set around it.
I think one of the trends that you’re going to see is in enterprise interface software design. I think the last two waves were about moving everything to the cloud and then making everything more consumer-friendly: making it more social, allowing tags, avatars, etc. I think the next wave is going to be about capturing small pieces of data that will allow you to train machine-learning models. That’s going to allow you to provide more context and build smarter tools. The trend you’re going to see is interfaces designed in ways that help you capture more data about why somebody did something, at least on the enterprise side.
On the consumer side, I think you might see the reverse. You might see trends where consumers are going to start being more serious about access to their data. Those interfaces might even move in the other direction. Or, if they continue to capture more information, they might move to models that at least have tighter consumer control over what can be done with that data.
For more from Rob, listen to our podcast. For information about other ways that businesses are optimizing and streamlining their services by improving the user experience, read PSFK’s newsletters and download our reports.