BERT

Well, that didn’t take long. In my introduction posts I made the quip about trying to shoehorn some deep learning into the project. Turns out I didn’t have to try too hard, and it may actually be a good idea. As part of my literature review (a very important step I neglected last time) I came across a newly released paper from the Google Research team: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Don’t ask me what all of those words mean, because I don’t really have a clue, but as I understand it the idea is to train up a Transformer in a generic, non-task-specific way, release it into the world, and then have idiots like me perform the last-mile, task-specific fine-tuning.
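For the curious, here’s roughly what picking up one of those released models looks like. This is a minimal sketch assuming the Hugging Face transformers library, which is my choice and not something from the paper; the checkpoint name is just the standard public base model:

```python
# Minimal sketch: load a pre-trained BERT and get contextual embeddings.
# Assumes the Hugging Face `transformers` library (pip install transformers torch);
# none of this tooling is prescribed by the paper itself.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("turn on the kitchen light", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dim vector per (sub)word token -- this is the "generic" part
# that fine-tuning later specializes for a particular task.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 7, 768])
```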

Again, I have no idea how to actually do that, or honestly whether that’s what the paper says, but that’s the gist I got from the few words I did understand. In related news, TIL “SOTA” apparently means State of the Art.

So now that we have a nifty machine-learned general language understanding model, we have to worry about what we’re going to do for what I’m calling the last-mile training and what the paper refers to as ‘fine-tuning’. The paper suggests a few use cases for the BERT models, including Natural Language Inference :shrug:, Question Natural Language Inference :wtf:, Sentence Sentiment Analysis :u-mad: and something called Linguistic Acceptability :idk-nfi:.
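I won’t pretend to understand that task list, but the fine-tuning mechanics for the simplest one (sentence sentiment) seem to boil down to something like this. Again a sketch assuming the same transformers library; the toy sentences, label set and learning rate are my placeholders, not anything from the paper:

```python
# Sketch of fine-tuning: bolt a classification head onto pre-trained BERT
# and take one gradient step on toy sentiment data. Real fine-tuning loops
# over a proper dataset; the labels and hyperparameters here are made up.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(["this is great", "this is awful"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```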

Since none of those words make sense to me, I’m going to attack this problem differently. One of the techniques that I ‘discovered’ last time was to establish basic actions, states and object names, then decree a grammar over them to make a mini-language. This proved surprisingly powerful, scaling from controlling individual embedded devices up to entire conversations between human and machine. With this little language (working title “LIL: Language Interface Language”) the problem becomes a well-trodden (by other people) path for me to follow. It’s a straight-up Machine Translation problem that I need to solve.
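To show what I mean, here’s the flavour of the thing. Every name and rule below is invented on the spot for illustration; LIL doesn’t actually have a vocabulary or grammar yet:

```python
# Hypothetical flavour of a LIL-style mini-language -- none of this
# vocabulary or grammar exists yet; it's purely illustrative.
ACTIONS = {"turn_on", "turn_off", "query"}
OBJECTS = {"kitchen_light", "front_door", "thermostat"}

def parse_lil(sentence):
    """Accept only '<action> <object>' sentences, e.g. 'turn_on kitchen_light'."""
    action, obj = sentence.split()
    if action not in ACTIONS or obj not in OBJECTS:
        raise ValueError(f"not a LIL sentence: {sentence!r}")
    return action, obj

# The Machine Translation problem is then mapping free-form English to LIL:
#   "could you switch the kitchen light on?"  ->  "turn_on kitchen_light"
print(parse_lil("turn_on kitchen_light"))  # ('turn_on', 'kitchen_light')
```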

This has the additional advantage of including a human-readable intermediate layer, which actually provides some level of explainable AI. That’s huge.