Takeaways on coding with LLMs
This section offers a few takeaways on coding with LLMs. Models are evolving so fast that these are likely to be wrong within years, months, or maybe even weeks, so take them for what they are!
Education and exposure
However one might feel about AI and its ethical implications (and there are many!), it is important, as citizens and researchers, to be educated on the topic. Education involves learning about the technology, including its benefits, risks, and problems, but also gaining personal experience with AI tools.
Criticism of these tools is absolutely valid when it comes from an educated and informed place, not from uninformed prejudice. As these tools evolve (and they are evolving very fast), it is important to keep our knowledge and understanding up to date.
For non-specialists, the deep learning literature and programming forums might not be the best ways to keep up to date with the evolution of these tools. What has worked best for me is to follow a number of podcasts. In particular, I like Hard Fork from the New York Times, a fun and approachable but well-researched podcast on technology that often deals with AI, as well as episodes of the Lawfare podcast that deal with AI from the perspective of law and geopolitics. There are many other podcasts on AI, but they tend to target a technical audience.
Critical thinking
Be critical and don’t blindly trust answers from LLMs: they will “happily” tell you that the code is doing something when it is not (in the same way that they can give wrong answers to any question). It is crucial to double-check the code.
Be mindful that code can be wrong in ways that are obvious and easy to spot or in ways that are much less obvious. It can even be harmful.
The more Python you know, the easier it becomes to assess code critically. Until you have enough knowledge to understand and evaluate the code by yourself, don’t hesitate to use LLMs to explain the code to you. While they might not always write the right code to solve a particular problem, they are usually excellent at explaining what snippets of code do and how they work.
Constructive usage
Use LLMs as assistants and instructors to brainstorm with and learn from, not as magic boxes that will do everything for you. They do make coding a lot easier, they are tremendously useful, and they will enable you to write useful Python code much sooner, but you still have to think and learn.
Inescapable?
It seems plausible that, as these tools become increasingly helpful and widely adopted, those who refuse to use them will find themselves at a growing disadvantage. In a sense, as with most transformative technologies, we might not really have a choice. For those struggling with the many ethical concerns, it might be more realistic to push for good regulation and a more equitable system than to reject the technology altogether.