Week 22: The Chatbot and the UI
- ainergyy
- Apr 12, 2022
- 2 min read
Updated: Apr 24, 2022
This week the team started implementing the Chatbot and its dialog manager. The image shown below summarizes all the actions the chatbot takes into account. In short, the chatbot can:
Respond to greetings, appreciations and farewells
Explain variations in consumption, flexibility and generation
Quantify, e.g. how long a device can still stay on, or how much generation, consumption or flexibility will be available for a given time of the day
Receive feedback from users, both positive and negative, regarding the energy management
Recommend devices or cars based on their properties, such as the energy capacity or the car range.
All these functionalities are implemented as dummies for the time being, since the real systems behind them do not exist yet, but the project can easily be adapted to work with the real systems once they do. Also, although the diagram may give the idea that the conversation follows a single flow of information, that is not the case: the diagram only represents the types of functionality the chatbot has. In reality the conversation is multi-turn, and several of these functionalities may come up within a single conversation. This means that a conversation like Greeting -> Quantify -> Generation -> Thanks -> Feedback -> Goodbye is possible.
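To make the idea concrete, the dummy functionalities described above can be sketched as a simple intent-to-handler dispatch, where each turn of the multi-turn conversation triggers one handler. The intent names, handler functions and reply texts here are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: one dummy handler per chatbot functionality.
# Names and replies are illustrative, not the project's real implementation.

def handle_greeting(slots):
    return "Hello! How can I help with your energy today?"

def handle_quantify(slots):
    resource = slots.get("resource", "consumption")
    return f"The {resource} forecast is not yet available (dummy reply)."

def handle_feedback(slots):
    return "Thanks for the feedback!"

def handle_goodbye(slots):
    return "Goodbye!"

HANDLERS = {
    "greeting": handle_greeting,
    "quantify": handle_quantify,
    "feedback": handle_feedback,
    "goodbye": handle_goodbye,
}

def respond(intent, slots):
    """Dispatch one conversation turn to the handler for its intent."""
    handler = HANDLERS.get(intent)
    return handler(slots) if handler else "Sorry, I didn't understand that."

# A multi-turn conversation is just one respond() call per turn:
turns = [("greeting", {}), ("quantify", {"resource": "generation"}), ("goodbye", {})]
replies = [respond(intent, slots) for intent, slots in turns]
```

Because each turn is dispatched independently, any ordering of functionalities, such as the Greeting -> Quantify -> Goodbye example, falls out for free.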
Another detail worth pointing out is how the chatbot deals with context, that is, how the bot keeps track of what is being discussed in the conversation. Since the chatbot is implemented as a web service that receives requests from a UI, each request carries context about the last sentence of the conversation, namely the intent and the slots identified so far, so the chatbot can tell whether the intent has changed in the meantime or whether the user is simply providing more information for the current intent.
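A minimal sketch of this context mechanism, assuming a JSON request body; the field names (`message`, `context`, `intent`, `slots`) are assumptions for illustration, not the service's actual schema.

```python
# Hypothetical request shape: the UI echoes back the context from the
# previous turn (last sentence, current intent, slots identified so far).
request = {
    "message": "for the next two hours",
    "context": {
        "last_sentence": "How long can the dishwasher stay on?",
        "intent": "quantify",
        "slots": {"device": "dishwasher"},
    },
}

def resolve_turn(new_intent, new_slots, context):
    """Decide whether the intent changed or the user is just filling
    in slots for the intent already being discussed."""
    if new_intent is None:
        # No new intent detected: keep the context's intent and merge
        # the newly extracted slots into the ones identified so far.
        return context["intent"], {**context["slots"], **new_slots}
    # The user switched topics: start fresh with the new intent.
    return new_intent, new_slots
```

For example, if the classifier extracts only a duration slot from "for the next two hours", the turn resolves to the ongoing `quantify` intent with both the device and the duration filled in.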

As for the User Interface, they say an image is worth a thousand words, and we say a video is worth a thousand images (literally).
The UI uses the Dialog Manager service to interpret each message, receiving its classifications as well as a response message to be displayed. The classifications are then included in the following message, so the Dialog Manager can use them as context.
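The round-trip described above can be sketched as follows; the payload fields (`classifications`, `message`, `context`) are assumed names for illustration, not the actual Dialog Manager API.

```python
# Hypothetical UI-side loop: each request echoes back the classifications
# the Dialog Manager returned on the previous turn, as context.

def next_request(user_message, last_response):
    """Build the next request to the Dialog Manager, carrying the
    previous turn's classifications forward as context."""
    return {
        "message": user_message,
        "context": last_response.get("classifications", {}),
    }

# Simulated response from the previous turn:
last = {
    "reply": "Hi! How can I help?",
    "classifications": {"intent": "greeting", "slots": {}},
}
req = next_request("How much flexibility do I have?", last)
```

The UI itself stays stateless with respect to the conversation: all dialog state travels inside the request/response payloads.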
The interface also provides a debug view that lets the user see exactly what the bot interpreted, as well as report ill-classified messages, which are saved on the server for further inspection. Finally, it supports both speech-to-text and text-to-speech.