AI and the Illusion of Skill
We rely on signals all the time to infer things we can't directly observe. For example, an indicator lighting up on your car dashboard allows you to infer that something is wrong with, say, the pressure of your tires. Crucially, such inferences are warranted only to the extent that the signal stands in a reliable relationship with the state of affairs it signals. If you know the dashboard light comes on at random, irrespective of your tire pressure, then your inference that something is wrong with the tires is not warranted (in that case, something is wrong with your dashboard). More generally, for an inference to be justified, the signal has to correlate strongly with the state of affairs it is presumably a signal of. It doesn't have to be perfect, though: you might justifiably infer that your tire pressure needs adjusting even when it's actually fine, if the indicator lights up 99% of the time when the pressure is low and only rarely when it's normal. Because false positives are uncommon, the inference remains warranted even if, in this instance, it happens to be wrong.
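To make that last point concrete, here is a minimal sketch of the reasoning, with illustrative numbers I'm assuming for the example (a 10% base rate of low pressure, a 99% hit rate, and a 2% false-alarm rate); it shows, via Bayes' rule, why a reliable but imperfect indicator still justifies the inference.

```python
# Illustrative Bayes calculation: how strongly does the dashboard light
# support the inference "tire pressure is low"?
# All numbers are assumptions chosen for the example, not taken from the text.

p_low = 0.10                # prior: pressure is actually low
p_light_given_low = 0.99    # hit rate: light is on when pressure is low
p_light_given_ok = 0.02     # false-alarm rate: light is on despite normal pressure

# Total probability that the light is on
p_light = p_light_given_low * p_low + p_light_given_ok * (1 - p_low)

# Posterior probability that pressure is low, given that the light is on
p_low_given_light = p_light_given_low * p_low / p_light

print(f"P(low pressure | light on) = {p_low_given_light:.2f}")  # ~0.85

# By contrast, a light that comes on at the same rate whether or not the
# pressure is low leaves the posterior equal to the prior, so observing it
# provides no justification for the inference at all.
```

The exact numbers don't matter; what matters is that the inference is warranted by the correlation between signal and state, not by the signal being infallible.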
A parallel process unfolds in interpersonal settings: you make inferences about people based on signals, and those inferences are justified or not depending on how well the signals correlate with the traits they supposedly reveal. Someone behaves recklessly; you infer that the person is a reckless individual. As with the car, these inferences are useful because they help us navigate the world more successfully: you check your tires when the indicator is on (potentially preventing an accident or damage to your car), and as a manager you assign a task that requires close attention to detail to another employee rather than the one you find reckless (assuming you have good evidence that this isn't just the fundamental attribution error at work).
Now we have AI. One problem with AI, as I see it, is that it can significantly distort this inference-making process. A friend of mine, a software developer, told me it's now quite common for junior engineers to apply for senior roles because they can use AI tools to complete the technical assessments. These are typically take-home assignments submitted after a couple of days, and with AI's help, candidates can produce work that looks like it came from someone far more experienced, even though their actual skill is all over the place. Can you tell how good a student is from their assessment? How good a designer is from an art portfolio? How good a writer is from an essay they have written? This seems harder than ever: assessments, essays, lines of code, portfolios, and the like have lost part of their value as signals of skill.
Does this matter? It depends. Consider the following counterfactual, and potentially future, scenario: a novice using AI can produce outputs of the same quality as an expert using AI, consistently and sustainably. In this hypothetical, AI closes the expertise gap. In the fashion of Searle's Chinese Room, the AI would let the junior produce outputs equal in quality to the expert's: a story that could have been written by Cynthia Ozick, generated from a few prompts that require no real writing expertise, or a dream-like painting à la Salvador Dalí produced from a few text prompts to DALL·E. If this scenario became reality, the erosion of signals wouldn't matter to someone who cares only about the outputs themselves, the ends rather than the means. Inferences about other people's skills would become inconsequential: if outputs are all that matter, it makes no difference who is the expert and who is the novice, since both can produce work of the same quality. It would, however, raise important questions about fairness and wealth distribution, since expertise usually commands a premium.
As things stand, though, AI's impact seems negative: it makes signals of expertise harder to interpret, that is, harder to link to genuine expertise or skill. Strong assessments, essays, or code can, as I said, no longer be so easily attributed to truly capable individuals. This matters because AI is not yet fully autonomous. The cliché still holds: "It is a tool." If non-experts manage to trick the system using AI (for example, a master's student admitted to a program primarily because AI helped them write an insightful research proposal), then, at least for now, quality can be expected to decline, simply because AI cannot yet fully replace human expertise or skill. Especially in projects that require sustained, long-term effort, a human remains an essential part of the loop.
The negative impact is therefore twofold. First, less qualified individuals may end up doing the work, and unless AI compensates with substantial productivity gains, overall quality and economic output are likely to suffer. Second, there is a deeper ethical concern: rewards may go to those who have not genuinely earned them, weakening the link between effort and reward. Someone might be paid handsomely for prompting an AI to generate a piece of art, for instance, while the original artists are excluded. In this way, AI risks undermining both economic outcomes and fairness.