Using GPT-3 for IoT Automation
Waylay is a low-code platform that lets developers apply enterprise-level automation anywhere. Hook up sensors, push data, and start enjoying the benefits of low-code automation.
Automation rules are the core of the Waylay platform. Developers write small code snippets (or use pre-existing ones) and chain them together with logical operators to define automation rules. Think of a rule as something that lets you turn on the water sprinklers if it has been sunny without rain for three days, or schedule an inspection for an industrial machine if an anomaly is detected on one of its many sensors. By chaining these rules together, we can create arbitrarily complex automation software.
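To make the idea concrete, the sprinkler rule above could be modeled as small condition snippets chained with a logical AND. This is only an illustrative sketch in Python; the function names and thresholds are our own, not Waylay's actual rule representation.

```python
def no_rain_for_days(days_without_rain: int) -> bool:
    """Condition snippet: true if it has not rained for at least three days."""
    return days_without_rain >= 3

def is_sunny(cloud_cover_pct: float) -> bool:
    """Condition snippet: true if cloud cover is low enough to count as sunny."""
    return cloud_cover_pct < 20.0

def sprinkler_rule(days_without_rain: int, cloud_cover_pct: float) -> str:
    """Chain the two snippets with AND: sprinklers turn on only if both hold."""
    if no_rain_for_days(days_without_rain) and is_sunny(cloud_cover_pct):
        return "sprinklers_on"
    return "sprinklers_off"
```

Swapping AND for OR, or nesting rules inside other rules, is how arbitrarily complex automations get built up from the same simple pieces.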
Making this automation technology accessible to everyone is one of Waylay's core values. Imagine if we could simply interact with this automation engine over voice or text, in a natural fashion. This is where NLP comes in. Instead of interacting with a computer in the typical manner, we can foresee a factory worker asking their machine "What is the temperature of oven 5?" or telling it "Raise a critical warning if the temperature of the freezer rises above -10 degrees and the door is open".
Getting this right certainly isn't easy. Human-spoken rules can carry a lot of ambiguity and require considerable intelligence to parse correctly and translate into Waylay automation rules.
If we wanted to build a deep learning solution based on 'traditional' methods, we would have a few problems to deal with. Primarily, a lack of data. To robustly parse human utterances and capture the information needed to translate them into something the Waylay system can understand, we would need a large amount of data spanning different ways of speaking and different types of Waylay rules. This data is currently not available. Even if we had this data, our model would need to be retrained every time we wanted it to serve a new manner of speaking or a new type of Waylay rule.
We turn to prompt engineering to solve these problems. If we can use GPT-3 to do the hard work for us, we can build a highly data-efficient system that does not need to be retrained to deal with new cases. How nice would that be?
The question now becomes: how can we leverage the capabilities of GPT-3 to do the dirty work for us? Unfortunately, it is very hard to teach GPT-3 to output the internal data structure Waylay requires directly from a natural language input. Luckily, we can work around this with a clever hack (for which we have to credit the smart folks over at Microsoft). In our solution, we let GPT-3 output a canonical sentence. This sentence holds the same information as our natural language input, but in a more structured fashion. For example, the utterances 'send David a message telling him to drive safe when it is raining in Paris' and 'only when the weather in Paris is raining tell David "Drive safe!" via sms' can both be reduced to the canonical sentence 'if weather in Paris is raining, then send SMS to David with message "Drive safe!"'.
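In practice this kind of canonicalization is typically driven by a few-shot prompt: a handful of (utterance, canonical sentence) pairs followed by the new utterance, with the model asked to complete the canonical form. The sketch below shows one plausible way to assemble such a prompt; the instruction wording, example pairs, and the commented-out API call are our assumptions, not the exact prompt from the post.

```python
# Hypothetical few-shot examples, taken from the pair discussed above.
FEW_SHOT_EXAMPLES = [
    ("send David a message telling him to drive safe when it is raining in Paris",
     'if weather in Paris is raining, then send SMS to David with message "Drive safe!"'),
    ('only when the weather in Paris is raining tell David "Drive safe!" via sms',
     'if weather in Paris is raining, then send SMS to David with message "Drive safe!"'),
]

def build_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt: instruction, example pairs, then the new utterance."""
    lines = ["Translate the request into a canonical automation sentence.", ""]
    for request, canonical in FEW_SHOT_EXAMPLES:
        lines.append(f"Request: {request}")
        lines.append(f"Canonical: {canonical}")
        lines.append("")
    lines.append(f"Request: {utterance}")
    lines.append("Canonical:")  # the model completes from here
    return "\n".join(lines)

# The prompt would then be sent to GPT-3 for completion, e.g. (illustrative only):
# completion = openai.Completion.create(model="text-davinci-003",
#                                       prompt=build_prompt(utterance))
```

Adding support for a new way of speaking, or a new type of rule, then amounts to adding one more example pair to the prompt rather than retraining a model.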
By rephrasing our semantic parsing task as a translation task, we are able to leverage a large pre-trained language model (GPT-3) to do all the hard work for us. Our solution works with extremely few data points, can easily be adapted to new situations without retraining, and we don't even need to host the deep learning model ourselves. Because of the strong capabilities of GPT-3, our solution shows remarkable generalization to unseen scenarios (and even unseen languages!).
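The payoff of the canonical sentence is that the remaining step is deterministic: because the canonical form is constrained, a simple parser can map it onto a structured rule without any machine learning. The "if ..., then ..." grammar and the output fields below are our own sketch of that step, not Waylay's internal format.

```python
import re

# Assumed canonical grammar: "if <condition>, then <action>".
CANONICAL = re.compile(r'^if (?P<condition>.+?), then (?P<action>.+)$')

def parse_canonical(sentence: str) -> dict:
    """Split a canonical sentence into its condition and action parts."""
    match = CANONICAL.match(sentence)
    if match is None:
        raise ValueError(f"not a canonical sentence: {sentence!r}")
    return {"condition": match.group("condition"),
            "action": match.group("action")}

rule = parse_canonical(
    'if weather in Paris is raining, then send SMS to David with message "Drive safe!"'
)
# rule["condition"] is 'weather in Paris is raining'
```

GPT-3 absorbs all the linguistic variation on the input side, so this last mile stays trivially simple and fully predictable.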
Read the full blog post at https://www.waylay.io/articles/nlp-case-study-by-waylay
Author: Karel D'Oosterlinck