Vlad-Stefan Harbuz

The Epistemic Implications of AI Assistants

Lately, AI assistants based on large language models, such as OpenAI’s ChatGPT, have caused considerable excitement. The general idea is that you supply a problem, such as “here’s my JavaScript code, why doesn’t it work?”, or “write two paragraphs about the political views of Bertrand Russell”, and the assistant will happily supply you with a solution. There’s good reason for excitement — these models are technically impressive, and they can certainly help us accomplish certain tasks more easily.

However, I would like to argue that how we use these models is crucially important. There are specific kinds of problems that such assistants can help us solve, and other kinds where they actually cause harm. This harm is of two different kinds. First of all, in certain situations the solutions provided by these models are highly likely to be faulty, which can harm our work. Secondly, our reliance on these models in these particular situations can harm our epistemic self-development by incentivising us not to acquire knowledge that would actually be necessary and beneficial. I haven’t seen many people draw this distinction, which I find worrying. I also think that this perspective tempers some of the more overzealous claims regarding the usefulness of these models, such as “programmers will be replaced by ChatGPT!”.

I will use programming as an example, but my point should be just as clear if you know nothing about programming.

Levels of Complexity

Any particular task can require us to understand things at different levels of conceptual complexity. Let’s consider a very rough, purely illustrative sketch of the levels of understanding that solving a programming problem might require:

  1. Layer 1 (trivial): looking up how to return multiple values from a Go function.
  2. Layer 2 (nuanced): using algorithms that enable your code to be performant.
  3. Layer 3 (complex): choosing the architecture and dependencies that maximise long-term maintainability of your project.

The important point here is that a problem can exist on a certain level of complexity, while a person’s understanding might extend down to a different level. For example, let’s say I am familiar with Go (layer 1), but know nothing about performance, time complexity or profiling (layer 2). If I am faced with a problem related to some aspect of Go, that’s not an issue, because the problem and my understanding both exist on layer 1.

However, if I’m dealing with a performance problem, I’m now in trouble: the problem exists at a lower level (layer 2) than my understanding extends to, so I can’t solve it without somehow expanding my knowledge. Of course, I might solve the problem by lucky coincidence, or by blindly implementing what someone suggests on the internet, but this really does me no service. I have not understood why I chose this solution, and if some future issue arises with it I will be clueless as to what to do, because my knowledge will still not extend to a low enough level, perhaps leading to a compounded and much more dramatic problem.
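To make the gap between layers a little more concrete, here is a small, hypothetical Go sketch; the scenario and the function names containsAnyNaive and containsAny are invented purely for illustration. Suppose someone on the internet tells me to replace my nested loop with the map-based version. It will probably fix my immediate slowness, but if I don’t understand why it is faster, I won’t know when its trade-offs, such as the extra memory it allocates, become the next problem.

    package main

    import "fmt"

    // containsAnyNaive reports whether any element of needles appears in
    // haystack. It works, but it compares every pair of elements, so it
    // takes time proportional to len(haystack) * len(needles).
    func containsAnyNaive(haystack, needles []string) bool {
        for _, n := range needles {
            for _, h := range haystack {
                if n == h {
                    return true
                }
            }
        }
        return false
    }

    // containsAny builds a set first, so each lookup is roughly constant
    // time, at the cost of some extra memory.
    func containsAny(haystack, needles []string) bool {
        seen := make(map[string]struct{}, len(haystack))
        for _, h := range haystack {
            seen[h] = struct{}{}
        }
        for _, n := range needles {
            if _, ok := seen[n]; ok {
                return true
            }
        }
        return false
    }

    func main() {
        users := []string{"ana", "bogdan", "carmen"}
        admins := []string{"carmen", "dan"}
        fmt.Println(containsAnyNaive(users, admins)) // true
        fmt.Println(containsAny(users, admins))      // true
    }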

Encyclopedic and Interpretive Tasks

Obviously, the above example is extremely hand-wavy, but I think we can get an important distinction out of it. Many problems can be split into those that are “encyclopedic” and those that are “interpretive”. In academia, this is sometimes described as a distinction between “bookwork” and “practical work”.

Looking up the population of Scotland, or how to return multiple values from a Go function, or how to use a certain programming library, are all encyclopedic tasks. The information is all already out there — we merely need to look things up and glue our findings together; little original thought is required on our part.
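For instance, the Go lookup mentioned above has a single, well-documented answer that any reference will give us. A minimal sketch of what it looks like, using an invented divide function as the example:

    package main

    import (
        "errors"
        "fmt"
    )

    // divide returns two values, a result and an error, which is the
    // idiomatic way for a Go function to hand back multiple values.
    func divide(a, b float64) (float64, error) {
        if b == 0 {
            return 0, errors.New("division by zero")
        }
        return a / b, nil
    }

    func main() {
        quotient, err := divide(10, 4)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("quotient:", quotient) // quotient: 2.5
    }

Arriving at this pattern requires no judgement on my part; the language documentation settles it.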

On the other hand, interpretive tasks do require some interpretation of the information, some original thought, and often some domain experience. If I’m trying to choose which programming language to use for my 3D videogame engine, there are a lot of considerations I have to take into account — there is no universal answer.

Some might object by saying that ChatGPT does actually perform some kind of interpretation, and does not simply spit out information found in its corpus. I will very quickly describe why I do not believe this to be the case.

First of all, the inherent inability of such models to deal with implicit assumptions is quite a severe limitation. If I ask ChatGPT to help me build a web application, it will inevitably make assumptions as to which programming language I might use, how I might structure my code, whether I might need a database, what kind of database I might need, and so on. The only solution is for us to provide increasingly detailed input so as to make these interpretive considerations explicit, but the more we do this, the more we are doing the work instead of having the machine learning model do it, so the model isn’t doing much interpretation.

The second reason is an empirical one. Playing around with these models even briefly makes it clear that their ability to perform inference is extremely limited, and that no general capacity for logic has yet emerged from looking things up in a corpus. Consider the following exchange:

Vlad: Anne’s husband’s mother lives in France. Anne lives in the same country as her mother-in-law. Where does Anne live?

ChatGPT: It is not possible to determine where Anne lives based on the information provided. In order to answer this question, additional information would be needed, such as the specific country in which Anne’s mother-in-law lives.

Using AI Assistants

My point here should already be obvious. AI assistants merely aggregate and provide access to a large corpus of information, and so they might be useful for encyclopedic tasks, just as a search engine is. I don’t think there is a problem with using these assistants for the same purposes you would use Google, i.e. as a reference for looking up information that needs no interpretation. However, many tasks are interpretive, and one might be misled into using something like ChatGPT to solve such a task.

There are two possible outcomes when this happens, and both are damaging. Firstly, perhaps the AI assistant has given you a solution that “works”, i.e. it solves your immediate problem. However, you don’t know why this solution works, so if a related problem comes up in the future, you will have an even bigger problem than you started out with. You will also have missed out on a great opportunity for learning, and will now be forced to maintain a mysterious machine-generated artefact. In a sense, it’s a bit like guessing — guessing the right solution doesn’t make you a specialist, because your ability does not extend past the specific problem you happened to guess the answer to.

The second possibility is that the solution appears to work, but actually doesn’t. This should be no surprise — there is no guarantee that something suggested by a machine learning model will not have numerous problems and pitfalls. You nonetheless deploy the solution into production, and everything quickly goes wrong while you are ill-equipped to fix it.

The Epistemic Implications

We have seen some of the bad practical outcomes of using AI assistants in the wrong situations, but my bigger point is that there are even worse epistemic difficulties: relying on these models is likely to make us feel that developing our knowledge is less necessary, or at the very least far less urgent.

If you have a black box that gives you what you think is the right answer every time, or even a lot of the time, what is the point in pushing yourself through the demanding process of learning when you can just rely on the box? Furthermore, you’re liable to be overconfident in your abilities if you feel like you have a magic tool to extricate you from any pickle. Basically, we might think: “what’s the point in putting in a lot of effort to learn if ChatGPT can just solve the problem for me?” But there is a point; our objectives should be self-development, knowledge and long-term ability, not short-term hacks, especially when, as we’ve seen, the hacks are often nothing more than faulty, short-sighted solutions.

This is akin to cheating on a test — you think you’ve solved the problem, but all you’ve done is cause a big issue for your future self who does not have the knowledge you have just pretended to have.

I find this very harmful for self-development, and the harm is magnified when we consider fields that are more important to us personally than programming, such as communication. I’ll use an example to illustrate this.

Communication

I admit that this post was motivated by the following Hacker News comment:

“I write things a lot of times in not the nicest way. Not mean but very blunt and matter of factly.

I’ve started to use GPTChat for important emails and it’s been a huge help. I can’t wait to have it directly integrated into my email app.”

Let us examine this person’s comment through the two lenses we have been using, the practical and the epistemic.

When it comes to the practical, we should take into account that there are different levels of necessary knowledge when writing an email, just as we saw with the programming example. It is trivial to write an email sending last month’s invoice, slightly more difficult to articulate our disagreement with an argument while being civil yet firm, and more difficult still to use our personal knowledge of the person we are talking to in order to defuse a difficult situation.

It might be fine to have a program generate a trivial email for us, but it seems uniquely foolish to allow a program to construct the tone we use when talking to our co-workers, because our interpersonal relationships are based on many details and nuances only we know, and this is still true when we have difficulties expressing these nuances. I would also argue that most people would not react positively to finding out that a program had been writing the emails of the person they were communicating with all along.

The epistemic considerations are, to me, where an even bigger problem arises. I can sympathise with the commenter’s lack of self-confidence when it comes to communication — I am aware that communication is difficult, and I have written about this. However, if communication is such a vital skill, improving it should be prioritised, especially when one feels one’s skills are lacking. Using a computer program to communicate does not help in the very important task of improving our communication abilities, and instead just glosses over and exacerbates the issue.

One might say that this is an unfair criticism, because I do not know that this person is not also actively working to improve their communication alongside their usage of ChatGPT, so the one might not exclude the other. However, I believe that using ChatGPT in these situations is not only orthogonal, but actively harmful to improving one’s communication, because the end result is that one misses out on important opportunities to learn from interpersonal contact, demanding and effort-laden as these opportunities might be.

Using ChatGPT “for important emails” is therefore liable to distort one’s interpersonal interactions and, if anything, worsen one’s communication skills instead of improving them.

Conclusion

The point I’m trying to make is not specific to programming or communication. I have simply used these two examples to illustrate a bigger point: “solving” problems we struggle with by blindly relying on computer-generated solutions we do not understand harms our process of acquiring knowledge.

Using these AI assistants as a search engine for encyclopedic tasks is not likely to cause us any more harm than using Google does. However, many problems are interpretive, and struggling with any problems of this kind is a good signal that we need to improve our knowledge and skills to better navigate the world around us, be that world professional or personal. Using a computer program as a crutch in situations that require our knowledge and personal expression is unlikely to give us good results in the present, and will certainly harm our ability to develop as people in the future.

Lastly, it should be obvious that if ChatGPT is only able to perform encyclopedic and not interpretive tasks, we’re unlikely to lose our jobs to it anytime soon.
