Coding is hard. A new family of AI tools called AI Coding Assistants tries to make this job easier for programmers by translating plain English into working code.
Let’s find out how the world’s leading coding assistants – GitHub Copilot, AWS Code Whisperer, and Tabnine – stack up against each other.
Most AI coding assistants integrate natively with the world’s most popular IDEs, such as VS Code or IntelliJ. Nowadays, they also integrate easily with a new breed of development tools: rapid development environments.
Designed to accelerate application development, these environments cover the entire software lifecycle: from database modeling to deployment across development, testing, and production environments.
Since the release of ChatGPT in late 2022, AI has shown truly remarkable capabilities: from passing bar exams for lawyers to generating code at a quality and speed that rivals human programmers.
Studies have shown that AI coding assistants can help software developers complete coding tasks significantly faster: in one GitHub study, developers using Copilot finished a coding task 55% faster than those working without it.
Lately, a whole wave of such AI Coding Assistants has entered the market.
Which AI Coding Assistant stands out from the crowd? This article compares some of the most popular AI coding assistants, their features, limitations, and popularity: GitHub Copilot, AWS Code Whisperer, and Tabnine.
Lastly, we look at how developers can benefit from AI inside of Five to rapidly build and deploy web applications, and at the limitations of AI Coding Assistants.
AI Coding Assistants are software tools that harness the power of artificial intelligence to enhance developers’ coding speed, code quality, and code security. They generate code based on natural language prompts.
Trained on billions of lines of code taken from public code repositories, AI Coding Assistants are based on Large Language Models (LLMs), a type of AI algorithm that uses deep learning techniques and massive datasets.
AI Coding Assistants are not stand-alone solutions that can produce code or systems independently. Instead, they are embedded into the most commonly used Integrated Development Environments (IDEs) and code editors, such as Visual Studio Code or the JetBrains family of IDEs (including IntelliJ, PyCharm, GoLand, etc.), as a plug-in or extension.
Once embedded into an IDE, AI Coding Assistants offer a wide range of features. Given their power and versatility, it’s easy to misunderstand what they can and cannot do. To simplify, let’s group these features into three main categories:
1. Code generation: turning natural language prompts, such as comments, into working code.
2. Code autocompletion: suggesting the next line, block, or entire function as the developer types.
3. Code checking: scanning existing code to explain it, or to surface bugs and security issues.
GitHub’s Copilot is an AI tool designed to improve developer productivity and was first released in October 2021. It was the first LLM-based coding tool of its kind to be released to the public.
Eddie Aftandilian, Principal Researcher at GitHub Copilot, says that “the idea behind Copilot came from a combination of the existing auto-complete in IDEs that you see, combined with the emerging capabilities of machine learning models.”
GitHub describes Copilot as an “AI pair programmer”: someone who constantly looks over programmers’ shoulders as they write, read, or debug code and gives helpful suggestions in the process.
Trained on billions of lines of code, GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages. For example, a programmer can write a comment inside their favorite IDE to ask for a function that reverses a string, and Copilot will spit out the function to do so.
In more general terms, Copilot is a natural-language-to-code system that turns simple English instructions into code in more than a dozen popular programming languages.
The technology powering Copilot is OpenAI’s Codex, which was trained on GitHub’s hundreds of millions of public code repositories. Not surprisingly, Copilot generates code best for the languages most commonly found on GitHub, namely Python, JavaScript, TypeScript, Ruby, Go, C++, and C#.
GitHub Copilot is available as an extension in Visual Studio Code, Visual Studio, Vim, Neovim, and the JetBrains suite of IDEs. Pricing starts from US$10 per month or US$100 per year.
As of the time of writing, the GitHub Copilot extension had more than 430,000 installs from the Visual Studio Marketplace.
AWS Code Whisperer, released in November 2022, is AWS’s response to GitHub Copilot. Code Whisperer was trained on billions of lines of code, including Amazon and open-source code.
Whereas GitHub refers to Copilot as an “AI pair programmer”, AWS describes Code Whisperer as an “AI coding companion”. Both are code generators.
Code Whisperer can currently generate code written in Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala. It can also scan your code to highlight and define security issues.
Code Whisperer is available as part of the AWS Toolkit for Visual Studio (VS) Code and the JetBrains family of IDEs, as well as for AWS Cloud9, the AWS Lambda console, JupyterLab, and Amazon SageMaker Studio.
As of the time of writing, the AWS Toolkit for VS Code had more than 140,000 installs from the Visual Studio Marketplace. Unlike GitHub Copilot, CodeWhisperer is free for individual developers; a free AWS Builder ID is all that is required.
Tabnine is an Israeli, venture-capital-backed start-up in the emerging field of AI Coding Assistants. Because of the high cost of training LLMs, start-ups are often considered to be at a disadvantage to larger, more established tech companies, such as Microsoft or AWS. However, Tabnine has shown that smaller companies can build competitive products.
Tabnine is an AI-powered code completion tool designed to assist developers in writing code more efficiently. Tabnine autocompletes lines of code, suggests full functions based on function declaration, and generates blocks of code based on natural language comments.
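To illustrate completion from a function declaration, here is a hypothetical Python sketch: the developer types only the signature and docstring, and the rest is the kind of body a completion tool might propose (illustrative, not Tabnine’s actual output):

```python
# Typed by the developer: the declaration and docstring only.
def median(values):
    """Return the median of a list of numbers."""
    # The lines below are the kind of completion an assistant
    # might suggest from the signature and docstring alone.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3, 1, 2]))     # prints 2
print(median([4, 1, 3, 2]))  # prints 2.5
```

Note that the suggestion is inferred purely from the declaration’s wording, which is why descriptive names and docstrings tend to produce better completions.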
Unlike GitHub’s Copilot or AWS’ Code Whisperer, Tabnine uses smaller “code-native” AI models trained on specific programming languages or areas.
Tabnine’s model can be trained on smaller datasets to capture the specific patterns in customers’ repositories and deployed on-premise. For example, Tabnine can be used by customers who want their own AI Coding Assistant that is specifically trained on their own code repositories, rather than publicly available code.
Tabnine’s Co-Founder and CTO describes Tabnine’s unique approach to the industry like this: “If you want to be a subject in Microsoft’s kingdom, use Copilot. If you want to be your own emperor, use Tabnine”. With Tabnine, you can run an air-gapped model in a closed environment on your own code. You have full control over the code that’s used to train your AI assistant. Of course, this also means that the AI assistant will better understand the intricacies of your codebase, rather than giving generic advice.
Tabnine currently supports Angular, C, C++, C#, CSS, Dart, Go, Haskell, HTML, Java, JavaScript, Kotlin, MATLAB, Node.js, Objective-C, Perl, PHP, Python, React, Ruby, Rust, Sass, Scala, Swift, and TypeScript.
The tool is available as an extension to the following IDEs: VS Code, IntelliJ, WebStorm, PyCharm, GoLand, Eclipse, Sublime, RubyMine, CLion, Neovim, PhpStorm, Android Studio, Rider, and AppCode.
As of the time of writing, Tabnine had more than 5.9 million installs from the Visual Studio Marketplace. Tabnine also has a free plan that offers basic code completion features.
There is no easy answer to the question of which one of these three coding assistants is the best.
In a discussion thread on Reddit, one user writes that Tabnine “has always been quiet and useful, not very flashy or especially ‘fun’ to use, but I also never ran into problems with it.” The same user calls Copilot quite “dangerous”. Why?
“Since Copilot does so much and also well at first glance, you may drop your guard over time and just *believe* what it suggests, without double-checking”.
Is it a problem of the technology or the user if the technology is so good that you tend to trust it almost blindly?
A more objective study of the subject was carried out by Burak Yetistiren et al. in April 2023. In a paper titled “Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT”, the authors concluded that ChatGPT is the best code generator: “The latest versions of ChatGPT, GitHub Copilot, and Amazon CodeWhisperer generate correct code 65.2%, 46.3%, and 31.1% of the time”.
They further observed that Copilot and Code Whisperer improved dramatically between releases, showing that the technology is progressing rapidly.
If you have read this far, you might rightfully wonder why ChatGPT is not included in this review.
The reason we did not add ChatGPT to our list of reviewed AI Coding Assistants is that it cannot be embedded into an IDE. Moreover, ChatGPT is developed by OpenAI, the same company that developed Codex, the LLM behind GitHub Copilot.
Based on the information that is publicly accessible about these two tools, there seems to be substantial overlap in their training data, so if you wish to use ChatGPT right inside your IDE, the closest you’ll get is GitHub Copilot.
Five is an IDE for rapid web application development and deployment. Five helps developers build custom business applications, CRUD applications, internal tools, or line-of-business applications.
Five lets developers extend their applications by writing JavaScript or TypeScript in an IDE-like code editor that comes with handy features, such as syntax highlighting and code completion. On top of this, they can benefit from OpenAI’s code interpretation and debugging features right inside Five.
Simply add your OpenAI API key to Five, and let AI check your code for errors and bugs, or explain your code to you.
To see Five’s AI features in action, check out these two videos:
1. This video demonstrates how to explain code using AI in Five:
2. Here’s a video of how to use AI to check for errors in your JavaScript or TypeScript functions in Five:
Now that AI can help us write code, where are the limitations? Is the code generated by AI 100% reliable?
The answer is no. Developers need to make sure to use AI Coding Assistants responsibly and be aware of their limitations.
AI does a pretty good job of generating code. But AI Coding Assistants are not perfect: they don’t get the code they generate right 100% of the time. This is because of the way they “learn” how to code.
Large language models, the technology that all of the Coding Assistants reviewed above rely upon, are text generators.
They analyze an incredibly vast dataset of text – in the case of AI Coding Assistants, code – and use this knowledge to predict and generate the most likely next word in the sequence. But their predictions can be wrong, as they do not apply reasoning or logic, or try to understand the purpose of code in the way a programmer would.
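The next-word-prediction principle can be sketched with a deliberately tiny toy model. Real LLMs use deep neural networks trained on billions of tokens; the bigram counter below only illustrates the idea of predicting the next token from what came before, with no understanding of what the code does:

```python
# Toy "language model": bigram statistics over a tiny corpus of code
# tokens. This only illustrates next-token prediction; it has no
# notion of syntax, correctness, or the code's purpose.
corpus = "def add ( a , b ) : return a + b".split()

# Record which token follows which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def predict_next(token):
    """Greedily pick the most frequent continuation of `token`."""
    candidates = follows.get(token, [])
    if not candidates:
        return None  # the model has never seen this token before another
    return max(set(candidates), key=candidates.count)

print(predict_next("def"))     # prints "add"
print(predict_next("return"))  # prints "a"
```

A model like this happily continues any sequence it has statistics for, whether or not the result makes sense, which is exactly why the output of far larger models still needs human review.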
For example, “there have been many reports and articles about AI hallucinating [software code] packages that don’t exist, attackers predicting the package names and then creating malicious versions”, says Sara Faatz, Director of Developer Relations at Progress, a software development company.
On the GitHub Copilot website, this problem is addressed with the following statement (highlights are ours):
“GitHub Copilot offers suggestions from a model that OpenAI built from billions of lines of open-source code. As a result, the training set for GitHub Copilot may contain insecure coding patterns, bugs, or references to outdated APIs or idioms. When GitHub Copilot produces suggestions based on this training data, those suggestions may also contain undesirable patterns. You are responsible for ensuring the security and quality of your code. We recommend you take the same precautions when using code generated by GitHub Copilot that you would when using any code you didn’t write yourself. These precautions include rigorous testing, IP scanning, and tracking for security vulnerabilities.”
Another risk of using LLMs is the lack of regulation. Say an LLM was trained on a public repository that is subsequently made private by an owner who no longer consents to their code being used by the LLM. What then?
In the worst case, the development team using the code would have to rewrite the code, as they are illegally using someone else’s content without their consent. If this sounds far-fetched, it is exactly what is happening in the area of image-generating AI. Getty Images claims Stability AI ‘unlawfully’ scraped millions of images from its site and is suing the company over alleged copyright violations.
Martin Heller, a tech writer, aptly summed up the impact of AI Coding Assistants on programmers’ work.
On top of this, AI can introduce new risks into the development process, such as copyright infringement or the accidental use of hallucinated software packages.
To sum up: large language models and AI Coding Assistants are powerful tools that can help software developers write production-ready code faster. But they come with their own risks and shortcomings.
As the popular saying goes: “AI won’t take your job. The person using AI will.” To expand on this: it’s the person who knows how to use AI responsibly and with foresight. This is especially true in programming.