There has been an increasing fear of AI taking over our lives, fed mainly by news articles that try to convey the kind of research being done at the moment. Now that companies like Facebook or Google are making major contributions to the field, it is no wonder that news outlets take a shot at these events and try to bridge the gap between the sciency jargon used in the actual papers and the main outcome, in a way that is accessible to the general public. Of course, there's also the more banal side of journalism: writing an article that's catchy enough that people actually go and read it. Those two pressures are what end up skewing the authors' perspective, and we end up swimming in a sea of articles that are simply misleading. TL;DR: the message gets lost in translation. This time, however, we're talking about the translation between the concise, objective, peer-reviewed, expert-oriented writing that characterizes a scientific paper and the watered-down version that journalists are tasked with interpreting.
Two recent examples come to mind. Last year, reports emerged about a particular algorithm that was able to recognize faces in images even after they had been blurred. Sounds impressive... and creepy; not the research itself, but the potential applications for governments and stalkers. It turns out that the science behind the paper is not as solid as one might expect. First, the paper doesn't seem to be peer-reviewed, which is one of the hardest hurdles research has to clear in order to be considered new knowledge. It has been uploaded to arXiv, a web platform that simply allows anyone to upload documents in different fields of science; there is really nothing that ensures any kind of scientific rigor in the documents found on this platform. The paper addresses a valid and interesting question nonetheless, but the experimental part is mediocre at best. They use two datasets of faces, one of them containing images of only 40 subjects; anyone doing half-decent research with human subjects knows this is hardly representative.

But here's the major flaw of the paper: in order to identify the blurred subjects, they need to train a classifier based on a convolutional neural network (CNN). They chose to use samples from every subject as the training set and left a few instances (again, from every subject) to test the model's ability to generalize to "unseen" samples. This introduces an enormous bias in a classifier that we know is pretty good at learning, especially when the need for generalization is low or, in this case, almost non-existent: the features of the "unseen" samples are not really new to the model, since the same subjects have already been seen during training. A more realistic and challenging setup would be to leave some subjects' entire set of samples out and test on those. After factoring in this issue, together with other details like the size of the pictures and the amount of blurring in the experiments that do perform well, the whole story just doesn't seem as appealing as it did at the beginning. It's just another paper on arXiv that no one will remember after a week. No impact. No real contribution. No AI apocalypse.
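To make the splitting issue concrete, here is a minimal Python sketch (made-up array sizes, random data, no relation to the paper's actual code) contrasting the per-subject split the authors reportedly used with the subject-wise split argued for above:

```python
# Toy illustration of the two evaluation setups; the data is random noise
# standing in for image features, and the sizes are invented for this example.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 40, 10, 128
X = rng.normal(size=(n_subjects * samples_per_subject, n_features))  # stand-in features
subject = np.repeat(np.arange(n_subjects), samples_per_subject)      # who appears in each sample

# Split reportedly used in the paper: a few samples of *every* subject are held out,
# so each "unseen" test image shows an identity the model was trained on.
_, _, subj_train, subj_test = train_test_split(X, subject, test_size=0.2, random_state=0)
print(len(set(subj_test) - set(subj_train)))   # 0: every test subject also appears in training

# Subject-wise split: entire subjects are held out, so test identities are truly new.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, groups=subject))
print(len(set(subject[test_idx]) & set(subject[train_idx])))  # 0: no identity overlap
```

With the first split, strong accuracy mostly tells you the CNN memorized each subject's appearance; only the second split actually probes generalization to people the model has never seen.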
A second event that went viral recently was an experiment conducted by Facebook on an AI designed to negotiate and trade using the English language. The algorithm had the freedom to adapt the language in order to maximize a reward function which, in this case, focused on the efficacy of trading and not on the proper use of the language. Once the algorithm started modifying the language to express its goal more efficiently, the researchers decided to stop it (most likely because they wanted to enforce the proper use of English). The press freaked out. In the articles written about this pretty common situation (an algorithm not doing what you thought you had programmed it to do, so you stop it), it sounded as if they had to go and pull the power cord of the computer because it was already plotting to take over Wall Street! You know that feeling when your word processor freezes and, out of frustration, you just decide to close it? Well, that's pretty much what they did over at Facebook. Nothing else.
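The mechanism is easy to picture in code. The sketch below is a toy Python example of that kind of objective, not Facebook's actual code: if the reward only scores the outcome of the deal, nothing penalizes the agents for drifting away from readable English, and adding a language-fidelity term (a hypothetical fix here) is one way to keep them on track.

```python
# Toy sketch, not Facebook's implementation: a negotiation reward with and
# without a term that anchors the agent's messages to natural English.
def deal_reward(items_won, item_values):
    """Score only what the agent got out of the negotiation."""
    return sum(item_values[i] for i in items_won)

def anchored_reward(items_won, item_values, message_log_likelihood, alpha=0.1):
    """Hypothetical fix: also reward messages that a fixed English language
    model finds likely, so optimizing the deal can't wreck the language."""
    return deal_reward(items_won, item_values) + alpha * message_log_likelihood

# Example: the agent wins items 0 and 2, but its messages are odd, repetitive
# strings (low log-likelihood under a reference language model).
values = {0: 5, 1: 1, 2: 3}
print(deal_reward([0, 2], values))             # 8: gibberish costs nothing
print(anchored_reward([0, 2], values, -40.0))  # 4.0: gibberish now hurts
```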
These are just two examples, but we are getting more and more articles that try to convey the most important contribution of a paper yet end up twisting its conclusions into something that fits an apocalyptic (but fun to read) view of the research.
Now to the scary part: I'm a computer scientist and I have been a researcher in AI for the last four years, so it is easy for me to dig into, read, and giggle at these kinds of things. Then I realized I've read equally extraordinary articles about research in medicine, psychology, physics and so on. The natural question is: what is the difference between those articles, coming from fields I don't know, and the silly ones coming from fields I do know how to judge? Is there any reason why they shouldn't be just as misleading? Ideally, they wouldn't be. AI is rather new and there are not a lot of experts in the field (yet), but this still taught me to be more careful with the things I read. Especially the spectacular ones.
References:
- Machine learning can identify pixelated faces, researchers show - WIRED.
- Machine learning can recognize pixelated faces - Business Insider.
- Defeating Image Obfuscation with Deep Learning - arXiv.
- Facebook shuts down AI negotiation experiment.