Deep Learning in NLP
One minor drawback I see with deep learning in NLP is that you take precise information and convert it into approximate information: during conversion to embeddings, during the calculations themselves, and when dealing with high dimensionality. Is the information lost in this process worth it? Superior results compared to other methods suggest that it is, at the cost of more training data. It works well for general tasks such as parsing. But when it is applied to specific domain knowledge, the whole supervised approach becomes unnecessary. It could and should be done without supervised learning.
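As a toy illustration of what I mean by losing precision (everything here is made up for the example: the vocabulary, the 8-dimensional vectors, the noise scale), here is how an exact symbol like "42" becomes a dense vector that downstream code can only compare approximately, never recover exactly:

```python
# Toy sketch: a precise symbol becomes an embedding, and the only way back
# is an approximate nearest-neighbour lookup.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["41", "42", "43", "revenue", "profit"]
dim = 8  # assumed embedding size for this sketch
embeddings = {tok: rng.normal(size=dim) for tok in vocab}

def nearest(query_vec):
    """Return the vocab token whose embedding is most similar (cosine)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(vocab, key=lambda tok: cos(embeddings[tok], query_vec))

# Add a little noise, as any real pipeline of computations would.
noisy = embeddings["42"] + rng.normal(scale=0.5, size=dim)
print(nearest(noisy))  # usually "42", but only probabilistically --
                       # the exactness of the original symbol is gone
```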
After seeing the performance of Dialogflow, I must say that it does look good. But can it handle things like NLIDB (natural language interfaces to databases)? Why has NLIDB accuracy on the Stanford dataset stagnated?
I still believe DL should be supplemented with other methods. I think semantic parsers (built using DL) with domain knowledge layered on top are a better solution, as in the sketch below. That should reduce the need for supervised learning to some extent; there is no point in pursuing supervised learning on its own.
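Here is a minimal sketch of that hybrid idea, with assumed names throughout (neural_parse, apply_domain_knowledge, the toy employees schema): a DL parser proposes ranked candidate queries, and a hand-written domain layer filters out anything that does not fit the schema, so less supervised data has to teach the parser the domain.

```python
# Hybrid sketch: neural semantic parser proposes, domain knowledge disposes.
from dataclasses import dataclass

SCHEMA = {"employees": {"name", "salary", "dept"}}  # assumed toy schema

@dataclass
class Candidate:
    sql: str
    columns: set
    score: float

def neural_parse(question: str) -> list[Candidate]:
    """Stand-in for a trained DL semantic parser returning ranked guesses."""
    return [
        Candidate("SELECT wage FROM employees", {"wage"}, 0.9),      # hallucinated column
        Candidate("SELECT salary FROM employees", {"salary"}, 0.7),  # schema-valid
    ]

def apply_domain_knowledge(cands: list[Candidate], table: str) -> Candidate | None:
    """Keep only candidates whose columns actually exist in the domain schema."""
    valid = [c for c in cands if c.columns <= SCHEMA[table]]
    return max(valid, key=lambda c: c.score) if valid else None

best = apply_domain_knowledge(neural_parse("what do employees earn?"), "employees")
print(best.sql if best else "no schema-consistent parse")
```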
Why am I not working on this right now? Because I would like some consultancy on the whole thing before I jump in fully. Also hacking, plus my startup at the moment.