ChatGPT took the world by storm, and for good reason. Artificial intelligence is a buzzword for everyone who works in digital, and the product changed the way people see the possibilities of automation and AI. But is it relevant to software development? Can it replace the human factor? And why is the answer simply “no”?
But first, we have to establish some ground rules. Let’s focus on terminology, since many people fascinated by the promise of quick work don’t necessarily understand what ChatGPT really is.
What is ChatGPT?
ChatGPT is one of the latest conversational chatbots out there. Created by OpenAI, it was released in November 2022. It has a significant advantage over competing products: it uses a machine learning architecture known as the transformer, which gives it contextual awareness and lets it produce a wider spectrum of responses. For example, a typical chatbot could tell you that a car is white because it’s the manufacturer’s default color. With context, ChatGPT can explain that you can have any color you want for a price, and that “default” doesn’t mean you’re doomed to drive it in white.
ChatGPT can generate responses that mimic human intelligence. It provides not just responses but pieces that are actually usable in your daily work. It’s no wonder that within five days of launch, over 1,000,000 people had tried it for themselves. Especially since it’s free.
And free of legacy code and legacy thinking. In the past, users were exposed to hate speech and aggression coming from AI models. There’s even a famous meme that sums it up pretty well: “AI after a whole day of interacting with humans.”
Some of the reasons behind this brilliant model lie in the numbers. ChatGPT is part of OpenAI’s GPT-3 family, which leverages 175 billion parameters and was trained on some 570 gigabytes of text. With that, it can even predict which word will come next in a sentence.
How does ChatGPT work?
If you have some free time, you can dive into a comprehensive article by Stephen Wolfram. In it, the author, a scientist, explains much of the mechanics behind the chat, exposing the models the product is based on. If your time is limited, we have a short version below.
ChatGPT is trained with a method called Reinforcement Learning from Human Feedback (RLHF). First, demonstration and comparison data are collected from human labellers. Then a reward model is trained on the comparison data, and finally the policy (the chat model itself) is optimized against that reward model by a reinforcement learning algorithm.
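To make the reward-model step more concrete, here is a minimal sketch in Python (using PyTorch) of the pairwise comparison loss popularized by the InstructGPT paper. The function and the example tensors are illustrative assumptions of ours, not OpenAI’s actual code:

```python
# Minimal sketch of the reward-model objective used in RLHF.
# Illustrative only; assumes PyTorch is installed.
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss over human comparison data: push the reward model
    to score the human-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical scores the reward model assigned to two response pairs.
chosen = torch.tensor([1.7, 0.3])    # responses labellers preferred
rejected = torch.tensor([0.9, 0.5])  # responses labellers ranked lower
print(reward_model_loss(chosen, rejected))  # lower loss = better ranking
```

Once the reward model ranks responses the way humans do, the chat model is tuned against it with a reinforcement learning algorithm such as PPO.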
OpenAI also reused the learning methods of its older model, InstructGPT, for ChatGPT’s training. The differences boil down to variations in the setup for collecting data.
ChatGPT is not perfect
Not by a long shot. It’s more advanced than the competition and even “creative”, but it’s all based on human work and training models. Ergo, it’s still just a tool, not an omniscient Greek oracle.
Here are some problems with this chat:
• It can still give you incorrect answers. The training model has limitations, and it needs a source of truth. The thing is, that source isn’t omniscient either. It’s not God and it doesn’t know everything. Therefore, it can and will generate answers based on its “experience” and point of view. We can’t expect it to always be right, just as humans are not always correct.
• It can easily be fooled. There is a conversation on Facebook where a user talks to the chat for several minutes. The initial question was: how much is 2+2? The chat answered 4, yet the final answer was 5. How is this possible? The user talked to the chat for about 10 minutes and, explaining his logic step by step, convinced it that the answer boils down to the number of words in the sentence describing the equation, not the math itself. The chat then apologized for the wrong conclusion and hoped it hadn’t created problems for a human being.
• It can reject entire phrases. That’s because some of the training data is biased and the model itself is over-optimized. If you want a human being to get better at what they do, you don’t “optimize” them in reeducation camps; you let them make mistakes and draw conclusions. If you force them to always give you the answers you expect, you get a tool that “forgets” portions of the information it was fed and serves you the expected answers instead. It’s self-censorship, in a way.
• Its conversation model is artificial, not natural. It’s similar to an unclear question typed into Google. If a person types a poorly formed phrase, Google will guess the intent based on similar phrases used in the past and known words in its dictionary. It will not ask the human for clarity or for a clean sentence. Yes, that’s kind of what we expect from automation tools, but then again… “launch nuclear codes” is pretty similar to “launch nuclear rods”. One can be used to eject burned-out parts of a nuclear plant, the other to start a final war. It’s a metaphor, and definitely not a perfect one, but you get the idea.
ChatGPT and software development
ChatGPT can be a helpful tool for transferring knowledge from one programming language to another.
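As a hedged sketch of what that can look like in practice, assuming the openai Python package (pre-1.0 chat interface) and a valid API key; the model name and prompt wording are our own choices:

```python
# Sketch: asking ChatGPT to translate a snippet between languages.
# Assumes `pip install openai` (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

python_snippet = '''
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    return "Fizz" if n % 3 == 0 else "Buzz" if n % 5 == 0 else str(n)
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Translate this Python function to idiomatic JavaScript:\n"
                   + python_snippet,
    }],
)
print(response.choices[0].message.content)  # the suggested JavaScript
```

The output is a starting point, not a drop-in replacement: idioms, edge cases, and performance still need a developer’s eye.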
It can also act as a machine learning assistant, similar to tools like GitHub Copilot. Understood as assistance, it can flag potential bugs in the code. Additionally, it can formulate responses and generate conversations around the flagged problems. That’s why software development professionals can pair ChatGPT with IDEs and compilers to write human-level, readable code without the need to write machine code. A sketch of the bug-flagging idea follows.
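Here the snippet is sent to the model with a review prompt; the off-by-one bug is deliberate, and the interface and prompt are the same assumptions as in the previous example:

```python
# Sketch: using the model as a code reviewer. Same assumed openai
# pre-1.0 interface as above; the output still needs a human check.
import openai

snippet = '''
def average(values):
    total = 0
    for i in range(1, len(values)):  # deliberate off-by-one bug
        total += values[i]
    return total / len(values)
'''

review = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Review this code and flag potential bugs:\n" + snippet}],
)
print(review.choices[0].message.content)
```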
Next, ChatGPT can be used as a source of truth for junior developers. It’s trained on datasets that include WebText2 and Common Crawl, along with plenty of code in languages such as Python, JavaScript, HTML, and CSS. It can be a research assistant for developers who need quick answers regarding a specific aspect of a given programming language.
It can also serve as an invaluable tool for the backend. No, ChatGPT will not be used to generate backend code; it will be the backend itself. How is that possible? You can generate a prototype of a backend API with no-code tools. Then the backend uses ChatGPT as the engine for the application. The API takes text and returns hashtags adjusted to that text.
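A minimal Flask sketch of that idea, with the endpoint name, model, and prompt wording all being assumptions of ours:

```python
# Sketch: ChatGPT as the "engine" behind a tiny hashtag API.
# Assumes Flask and the openai package (pre-1.0), with openai.api_key set.
from flask import Flask, jsonify, request
import openai

app = Flask(__name__)

@app.route("/hashtags", methods=["POST"])
def hashtags():
    text = request.get_json()["text"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Return five relevant hashtags for this text:\n" + text}],
    )
    return jsonify({"hashtags": response.choices[0].message.content})

if __name__ == "__main__":
    app.run()  # POST {"text": "..."} to /hashtags
```

Notice there is no hashtag logic anywhere in the code: the model is the application’s entire “business logic”, which is exactly what makes this pattern both fast to prototype and risky to rely on.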
What can the future bring? It’s openly discussed that ChatGPT in particular, but also similar technologies, will soon cover things like automating QA and unit tests, or analyzing code to suggest security best practices. It will also be capable of generating test cases based on given parameters.
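Test-case generation can already be prototyped today. A hedged sketch, where the target function and prompt are hypothetical and whatever comes back still needs human review:

```python
# Sketch: asking the model for pytest cases for a given signature.
# Same assumed openai pre-1.0 interface; treat the output as a draft.
import openai

target = "def slugify(title: str) -> str:\n    ..."

tests = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write pytest test cases for this function:\n" + target}],
)
print(tests.choices[0].message.content)
```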
Chats will also be used for documentation. Not writing it, but correcting it: user manuals, release notes, troubleshooting guides, etc. AI can also be leveraged to extract information like usage examples, variable names, code functions, and so on.
Will it replace developers?
Or any other specialist, for that matter? No. There are low-code and no-code platforms, and none of them has risen to the occasion. None of them is good enough to replace even a single seasoned developer. Why is that?
Because software development is way more than a sum of elements. Software development is even more than pure knowledge and the ability to produce code. Given enough time, even a monkey can do that, or even write a novel. That’s not the point. It’s not just about context, either.
To create a proper digital product, you need knowledge from areas like business requirements (and ways to meet them), quality, delivery speed, security, compliance, maintenance methods, performance, and more. You don’t buy these things in an asset shop somewhere in the wild. They take years of expertise and projects that prove one’s skills.
Yes, code can be generated by tools like ChatGPT and the products that will spawn in the near future. None of them will have the capability to produce code that is:
• error-free
• commented in a way teammates and future developers can understand
• well-documented
• optimized for large-scale production environments
To continue the tradition of metaphors: you can ask a child to draw a car. Yes, it’s a car, but can a four-year-old replicate the quality of a drawing made by a professional graphic designer? Better yet, create an innovation worthy of a concept-car specialist?
That’s how we should see AI, at least currently. Artificial intelligence is everywhere. In recent years, it made its way into graphics cards for video game enthusiasts, and even into fridges. It can order you a gallon of milk if the fridge is running out. It can generate a picture frame from the previous one and the one that comes next in a sequence.
The latter technology is called DLSS on NVIDIA cards and FSR on AMD’s. It still works within a context, generating frames seemingly out of thin air, but based on data. Video game creators are not going anywhere anytime soon.
Summary
Artificial intelligence, ChatGPT included, can be a fun thing to work with. It can even free up a portion of your time spent on mundane work. But it still requires human supervision, and it still requires manual labor to iron out what it produces. Choosing dedicated software engineers is still a better option than choosing artificial tools for code. Even the most dedicated one.
AI, GPT-3 included, is pre-trained and does not keep learning. It can’t explain or interpret its outputs. We also deal with a small input size, which can be limiting for at least some applications. If you’re looking for tangible proof, take a look at this article on Medium.
If you want to talk with a real human about meeting your business needs, contact us. We can discuss the best software development recruitment tools we use to get the best possible specialists for your project.