The basic premise of “deep learning” is that you process big pools of data to try to find “good” and/or “bad” patterns. Once you’ve trained a model on that data, you can compare new data against it to accomplish some goal.
In this case, Mozilla is using it to scan commits to the Firefox codebase as a form of automated code review. The system was originally developed by Ubisoft as Commit Assistant, which they have been using for code analysis. Mozilla has since partnered with them and will contribute to its ability to scan C++, JavaScript, and Mozilla’s own Rust language.
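To make that premise concrete, here is a minimal sketch of the train-on-good/bad-patterns, score-new-data idea. It is not Clever-Commit’s actual pipeline (Mozilla hasn’t published that); the sample commits, the labels, and the choice of TF-IDF plus logistic regression are all made up for illustration.

# Toy sketch -- not Clever-Commit's real pipeline; the commits, labels, and
# model choice below are all invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

past_commits = [
    "fix null pointer dereference in the parser",   # later caused a regression -> 1
    "add unit tests for URL parsing",                # clean -> 0
    "rewrite cache eviction without locking",        # later caused a regression -> 1
    "update documentation for build flags",          # clean -> 0
]
caused_regression = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(past_commits)
model = LogisticRegression().fit(features, caused_regression)

# Score an incoming commit: a higher probability means "flag this for extra review".
new_commit = ["refactor lock handling in the cache layer"]
risk = model.predict_proba(vectorizer.transform(new_commit))[0, 1]
print(f"estimated regression risk: {risk:.2f}")

The real system presumably learns from far richer signals (diffs, file history, which commits were later touched by bug fixes), but the train-then-score shape is the same.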
Other vendors, such as Microsoft with its IntelliCode system, have been using deep learning to assist in software development. It’s an interesting premise that, along with unit tests, static code analysis, and so forth, should increase the quality of code.
Personally, I’m one of those people who regularly uses static code analysis (if the platform has a good and affordable solution available). It’s good to follow strong design patterns, but it’s hard to recover from the “broken window theory” once you have a few hundred static code analysis warnings… or a few hundred compiler warnings. Apathy sets in and I end up ignoring everything from that feedback level on down. That pushes me, when I can control a project from scratch, to keep it clean of warnings and code analysis issues.
All that is to say – it’ll be interesting to see how Clever-Commit is adopted. Since it apparently works on a per-commit basis, it shouldn’t be bogged down by past mistakes. I wonder if we could somehow apply that idea to other forms of code analysis. I’m curious what sort of data we could gather by scanning from commit to commit… and what that would bring in terms of a holistic view of code quality for various projects.
And then… what will happen when deep learning starts generating code? Hmm.
“what will happen when deep learning starts generating code?”
stackoverflow.com will shut down…
… but what if it's stackoverflow.com itself?
I don’t care! I just want my Flying Car and that Snarky Max Headroom AI gitting all Antisocial-Network up in everyone’s Facebook and Twitter Feeds!
Hi, this is Max-Max Headroom on Network 23, brought-brought-brought to you by… ah… oh, no no no no no no-no-no. I’m sorry, but sorry, but if they think I’m endorsing car accessories, they’ve got another dipstick-stick coming!
Max: Hey Cortana, Ask Siri to run that furby babble through the Goo Goo Goo Gle Google’s GIGO-Plex translator! Oh yes that’s some fa fa fa fine Ad copy you’ll get from that!
Scott, what do you make of this(1) for Radeon/GCN 1.1 and later AMD GCN GPUs on Linux?
“Prolific open-source AMD Linux driver developer Marek Olšák has sent out his latest big patch series in the name of performance. His new set of 26 patches provide primitive culling with asynchronous compute and at least for workstation workloads yields a big performance uplift.
The 26 patches allow for using async compute to do primitive culling before the vertex shader process. This work ends up yielding performance improvements for workloads that do a lot of geometry that ends up being invisible. This code is stable and passing nearly all conformance tests while working from GCN 1.1 through Radeon VII.” (1)
Currently this optimization is enabled for all professional/workstation graphics cards, and maybe they will try it out for games ASAP.
(1) “RadeonSI Picks Up Primitive Culling With Async Compute For Performance Wins”
https://phoronix.com/scan.php?page=news_item&px=RadeonSI-Prim-Culling-Async-Com
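As a rough intuition for what the quote above describes, here is a tiny CPU-side sketch of the general idea: discard primitives the camera will never see before paying for per-vertex work. It is not the RadeonSI/Mesa code (which runs the culling as async compute on the GPU ahead of the vertex shader); the triangles, the back-face convention, and the “expensive” transform are invented for illustration.

# Toy CPU analogue of primitive culling -- not the actual RadeonSI/Mesa code.
import numpy as np

def is_back_facing(tri, view_dir):
    # A triangle whose normal points along the view direction faces away
    # from the camera (under this toy convention) and can be culled early.
    normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return np.dot(normal, view_dir) >= 0.0

def expensive_vertex_transform(tri):
    # Stand-in for the per-vertex work we want to avoid wasting on
    # geometry that will never be visible.
    mvp = np.eye(4)  # identity model-view-projection, just for the sketch
    return np.hstack([tri, np.ones((3, 1))]) @ mvp.T

triangles = [np.random.rand(3, 3) for _ in range(10000)]
view_dir = np.array([0.0, 0.0, -1.0])

visible = [t for t in triangles if not is_back_facing(t, view_dir)]  # cull first...
shaded = [expensive_vertex_transform(t) for t in visible]            # ...then shade what's left
print(f"shaded {len(shaded)} of {len(triangles)} triangles")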
Ah, sorry, I thought I replied to this a couple of days ago, but I must have lost the tab.
Looks cool! I love the concept of open-source, alternative drivers. Unfortunately it's not too common on Windows (although that's more because of the problems that proprietary drivers have on Linux). I can see it being a QA hassle for developers but… meh.
Watch Dogs 3 trailer.
AI is already generating code – just Google it and there are many companies doing it. While it’s still at the village idiot stage, it’s only a few doublings away from super-human. As a software engineer myself, I plan to be obsolete in 6-8 years’ time.
If you plan to be obsolete in 6-8 years as a self-proclaimed software engineer, that simply means you’re a fraud!