AI Can Write Code Like Humans – Bugs and All


Some software developers are now letting artificial intelligence help write their code. They are finding that AI is just as flawed as humans.

Last June, GitHub, a Microsoft subsidiary that provides tools for hosting and collaborating on code, released a beta version of a program that uses AI to assist programmers. Start typing a command, a database query, or an API request, and the program, called Copilot, will guess your intent and write the rest.
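As an illustration of that workflow, a short comment and a function signature can be enough for this kind of assistant to propose the rest of the body. The sketch below is hypothetical, using a placeholder URL rather than real output recorded from Copilot:

```python
import requests

# A developer types the comment and the signature; an assistant like
# Copilot might suggest a body along these lines.
# Fetch the current weather for a city from a JSON API.
def get_weather(city: str) -> dict:
    response = requests.get(
        "https://api.example.com/weather",  # placeholder URL, not a real service
        params={"q": city},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```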

Alex Naka, a data scientist at a biotech company who signed up to try Copilot, says the program can be very helpful, and that it has changed the way he works. “It lets me spend less time jumping to the browser to look up API documentation or examples on Stack Overflow,” he says. “It does feel a little like my work has shifted from being a generator of code to being a discriminator of it.”

But Naka has found that errors can creep into his code in different ways. “There have been times when I’ve let a subtle mistake slip in by accepting one of its suggestions,” he says. “And it can be really hard to track down, probably because it makes mistakes with a different flavor than the ones I would make.”

The risk of AI generating faulty code may be surprisingly high. Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contains security flaws around 40 percent of the time.
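The flaws in question are classic weaknesses such as assembling a database query by pasting user input directly into a string. The snippet below is a hypothetical illustration of that pattern and of the safer parameterized alternative; it is not code taken from the study.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is interpolated straight into SQL,
    # allowing injection if `username` contains crafted quotes.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query lets the driver escape the input.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```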

That figure is “a bit lower than I expected,” says Brendan Dolan-Gavitt, an NYU professor who co-authored the analysis. “But the way Copilot was trained wasn’t actually to write good code; it was just to produce the kind of text that follows a given prompt.”

Despite the flaws, Copilot and similar AI-powered tools could herald a sea change in the way software is written. There is growing interest in using AI to help automate more mundane work. But Copilot also reveals some of the pitfalls of today’s AI techniques.

While examining the code made available for a Copilot plugin, Dolan-Gavitt found that it included a list of prohibited phrases. These were apparently introduced to prevent the system from blurting out offensive messages or copying well-known code written by others.

Oege de Moor, vice president of research at GitHub and one of the developers of Copilot, says security has been a concern from the start. He says the percentage of flawed code cited by the NYU researchers is only relevant for the subset of code where security flaws are likely.

De Moor invented CodeQL, a tool used by the NYU researchers that automatically identifies bugs in code. He says GitHub recommends that developers use Copilot together with CodeQL to ensure their work is secure.

The GitHub program is built on top of an AI model created by OpenAI, a prominent AI company doing cutting-edge work in machine learning. That model, called Codex, consists of a large artificial neural network trained to predict the next characters in both text and computer code. The algorithm ingested billions of lines of code stored on GitHub, not all of it perfect, in order to learn how to write code.
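Stripped of its scale, the idea behind such a model is repeated next-token prediction: score the possible continuations of the text so far, append the most likely one, and repeat. The toy sketch below assumes a hypothetical `score_next_token` function standing in for the trained network; it is a simplification, not how Codex is actually implemented.

```python
from typing import Callable, Dict

def complete(prompt: str,
             score_next_token: Callable[[str], Dict[str, float]],
             max_tokens: int = 50) -> str:
    # Greedily extend the prompt one token at a time.
    text = prompt
    for _ in range(max_tokens):
        scores = score_next_token(text)       # maps candidate token -> probability
        token = max(scores, key=scores.get)   # pick the most likely continuation
        if token == "<end>":                  # stop at an end-of-sequence marker
            break
        text += token
    return text
```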

OpenAI has built its own AI coding tool on top of Codex that can perform some striking coding tricks. It can turn a typed instruction, such as “Create an array of random numbers between 1 and 100 and then return the largest of them,” into working code in multiple programming languages.
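For that instruction, the generated program might look something like the Python below. This is a plausible hand-written example of the kind of output described, not text produced by Codex itself.

```python
import random

def largest_of_random_array(count: int = 10) -> int:
    # Create an array of random numbers between 1 and 100...
    values = [random.randint(1, 100) for _ in range(count)]
    # ...and return the largest of them.
    return max(values)

print(largest_of_random_array())
```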

Another version of the same OpenAI technology, called GPT-3, can generate coherent text on a given topic, but it can also regurgitate offensive or biased language learned from the darker corners of the web.

Copilot and Codex have led some developers to wonder whether AI might automate them out of a job. In fact, as Naka’s experience shows, developers need considerable skill to use the program, since they must often vet or tweak its suggestions.




