The basic premise of “deep learning” is that you process large pools of data to try to find “good” and/or “bad” patterns. Once you’ve trained a model on that data, you can compare new data against it to accomplish some goal.
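Deep learning proper involves neural networks, but the core idea of comparing new data against previously learned patterns can be illustrated with something as simple as a nearest-neighbor lookup. This is just an illustrative sketch; the feature vectors and labels below are invented:

```python
def classify(examples, new_point):
    """Label a new point by the nearest "trained" example.

    examples: list of (feature_vector, label) pairs, standing in for
    whatever patterns a real model would have learned from its data.
    """
    nearest = min(
        examples,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], new_point)),
    )
    return nearest[1]

if __name__ == "__main__":
    # Made-up "training" data: two labeled points in a 2-D feature space.
    examples = [((1.0, 1.0), "good"), ((8.0, 9.0), "bad")]
    print(classify(examples, (2.0, 1.5)))  # nearest to (1.0, 1.0) -> "good"
```

A real system replaces the distance lookup with a learned function, but the shape of the workflow is the same: learn from labeled data, then score new inputs against what was learned.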
Other vendors, such as Microsoft with its IntelliCode system, have been using deep learning to assist in software development. It’s an interesting premise that, along with unit tests, static code analysis, and so forth, should increase the quality of code.
Personally, I’m one of those people who regularly uses static code analysis (if the platform has a good, affordable solution available). It’s good to follow strong design patterns, but it’s hard to recover from the “broken window theory” once you’ve accumulated a few hundred static code analysis warnings… or a few hundred compiler warnings. Apathy sets in and I end up ignoring everything from that feedback level on down. That pushes me, when I can control a project from the start, to keep it clean of warnings and code analysis issues.
All that is to say – it’ll be interesting to see how Clever-Commit is adopted. Since it apparently works on a per-commit basis, it shouldn’t be bogged down by past mistakes. I wonder if we can apply that idea to other forms of code analysis. I’m curious what sort of data we could gather by scanning from commit to commit… and what that would bring in terms of a holistic view of code quality for various projects.
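One crude way to get at that commit-to-commit view: run a linter at each revision and track how the warning count changes over time, so each commit is judged by what it added rather than by the backlog. A minimal sketch of the bookkeeping half; the SHAs and counts are made up, and in practice you’d gather them by walking `git rev-list` and running your analyzer at each revision:

```python
def warning_deltas(history):
    """Given [(commit_sha, warning_count), ...] in oldest-first order,
    pair each commit with the change in warning count it introduced.

    The first commit is paired with its absolute count, since there is
    no earlier revision to diff against.
    """
    deltas = []
    prev = None
    for sha, count in history:
        deltas.append((sha, count if prev is None else count - prev))
        prev = count
    return deltas

if __name__ == "__main__":
    # Hypothetical per-commit warning counts from a linter run.
    history = [("a1b2", 10), ("c3d4", 14), ("e5f6", 9)]
    print(warning_deltas(history))
    # [('a1b2', 10), ('c3d4', 4), ('e5f6', -5)]
```

Positive deltas flag commits that introduced warnings; the running series gives a rough quality trend for the project as a whole, independent of whatever debt was already there.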
And then… what will happen when deep learning starts generating code? Hmm.