AI Skin Cancer Detection And Object Recognition With 3D Cues
With the increasing popularity and widespread use of neural networks and artificial intelligence, questions have arisen about how they will affect medicine, a field that has traditionally relied on human expertise and extensive training to diagnose and treat illness. One such area is dermatology, where doctors visually assess skin anomalies to screen for disease, including skin cancer. Typically, a patient comes in with concerns about an abnormal skin growth or spot, and the dermatologist assesses its shape, symmetry, color, and other qualitative properties to determine whether it is malignant or benign. For skin cancer in particular, a traditional acronym helps patients and doctors structure this assessment: ABCDE, for asymmetry, border, color, diameter, and evolution. By assessing these metrics, a patient can be screened for melanoma.
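The ABCDE rule described above is essentially a checklist over measurable properties. The sketch below makes that concrete as a toy rule-based screen; the feature names, thresholds, and example values are illustrative placeholders I am assuming for demonstration, not clinical criteria.

```python
from dataclasses import dataclass

@dataclass
class LesionFeatures:
    """Hypothetical, pre-measured properties of a skin lesion."""
    asymmetry: float            # 0 (symmetric) to 1 (highly asymmetric)
    border_irregularity: float  # 0 (smooth) to 1 (ragged)
    color_count: int            # number of distinct colors observed
    diameter_mm: float          # largest diameter in millimeters
    evolving: bool              # has the lesion changed over time?

def abcde_flags(lesion: LesionFeatures) -> list:
    """Return which ABCDE criteria a lesion triggers.

    The thresholds are illustrative placeholders, not clinical values;
    an actual screening decision belongs to a dermatologist.
    """
    flags = []
    if lesion.asymmetry > 0.5:
        flags.append("Asymmetry")
    if lesion.border_irregularity > 0.5:
        flags.append("Border")
    if lesion.color_count >= 3:
        flags.append("Color")
    if lesion.diameter_mm > 6.0:  # the commonly cited ~6 mm guideline
        flags.append("Diameter")
    if lesion.evolving:
        flags.append("Evolution")
    return flags

example = LesionFeatures(asymmetry=0.7, border_irregularity=0.3,
                         color_count=3, diameter_mm=7.5, evolving=True)
print(abcde_flags(example))  # ['Asymmetry', 'Color', 'Diameter', 'Evolution']
```

The point of the sketch is only that the rule decomposes into independent, checkable criteria, which is part of why it lends itself to automation.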
The diagnosis of skin cancer is, in many ways, a form of object detection: a doctor assesses visual properties to determine whether skin cancer is present. With recent advances in deep learning and object recognition, a job once done solely by dermatologists could be heavily assisted by AI. In the article “AI Is Coming for Skin Cancer Detection” by Caitlin Carlson of The Washington Post, a study is cited documenting how a convolutional neural network outperformed 58 board-certified dermatologists in detecting skin cancer (Haenssle et al., 2018; Washington Post, 2025). The article explains that although this is promising, it does not account for the dermatologist asking questions and inspecting a suspicious area by touch. The article also describes products currently used by dermatologists, such as DermaSensor, a handheld device for detecting skin cancer, and Nevisense, which the FDA has approved. These devices have their shortcomings, including false positives, but the author notes that the future is promising (Washington Post, 2025).
Training neural networks to perform this task accurately and effectively requires large datasets for model training, as well as intelligent design that accounts for the differences between object recognition in humans and in machines. This distinction is explored in a recent academic study by Cutler et al. of Loyola University Chicago, “Beyond the Contour: How 3D Cues Enhance Object Recognition in Humans and Neural Networks,” which compares how the human brain identifies objects with how neural networks do, and the biases of each (Cutler et al., 2025). What the researchers discovered is crucial for our understanding of object recognition. They found that while humans were about as accurate as their neural network counterparts at detecting objects, the methodology differed. Humans use volumetric cues and other 3D information around an object to identify it (Cavanagh & Leclerc, 1989; Cutler et al., 2025). The networks, on the other hand, relied heavily on similarity to the images they were exposed to during training. In atypical viewing conditions, where shape can be ambiguous, humans were better at detecting the object when given 3D cues to aid them (Tarr et al., 1998; Cutler et al., 2025).
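A toy way to picture "classification by image similarity" is nearest-neighbor matching: a new image gets the label of whichever training image it most resembles pixel-by-pixel. Real CNNs learn far richer features, but this minimal sketch, with made-up images and labels, captures the contrast the study draws between appearance matching and the 3D cues humans can fall back on.

```python
def distance(a, b):
    """Sum of absolute pixel differences between two flat images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest_neighbor_label(query, training_set):
    """Label a query image with the label of its closest training image."""
    best_label, best_dist = None, float("inf")
    for image, label in training_set:
        d = distance(query, image)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical 4-pixel "images" with invented labels, for illustration only.
training = [([0, 0, 9, 9], "benign"), ([9, 9, 0, 0], "malignant")]
print(nearest_neighbor_label([1, 0, 8, 9], training))  # benign
```

A classifier built this way can only generalize to inputs that look like its training data, which is exactly why ambiguous or unusual views trip it up.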
This reliance on image similarity is crucial for developing new computer models, particularly for anomaly detection in dermatology. If human doctors, even in ambiguous cases, can draw on other information to diagnose a skin formation rather than relying solely on comparison to previous examples, it becomes clear why a human doctor is better in complex detection cases. Conversely, when using AI, we want to minimize false positives as much as possible. One way to approach this is to train models on images from what the study's authors call “non-canonical viewpoints” (Cutler et al., 2025). To improve accuracy, models should be trained on a range of skin tones, body locations, and non-typical viewpoints.
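One common way to expose a model to more viewpoints is data augmentation: generating rotated and mirrored variants of each training image. The sketch below, a minimal illustration assuming images are small grayscale grids (lists of lists), shows the idea; a real pipeline would use a library such as torchvision and richer transforms, but the principle is the same.

```python
def rotate90(img):
    """Rotate a 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def mirror(img):
    """Flip a 2D grid left-to-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the image plus rotated and mirrored variants,
    simulating additional 'non-canonical viewpoints' of the same lesion."""
    variants = [img]
    current = img
    for _ in range(3):            # 90, 180, and 270 degree rotations
        current = rotate90(current)
        variants.append(current)
    variants.append(mirror(img))  # one mirrored view
    return variants

tile = [[0, 1],
        [2, 3]]
for v in augment(tile):
    print(v)  # five variants of the same 2x2 grid
```

Each original image yields five training examples here, so the model sees the same lesion from several orientations without any new data collection.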
Despite its current pitfalls, the future is bright for AI assistance in skin cancer detection. That is not to say that human doctors will be excluded from the equation, but rather that AI can be a great addition to the toolbox of a trained dermatologist. Patient interaction and in-person screening are still the standard. However, the future of AI in medicine relies on our understanding of the human brain and its object recognition methods so that we can better train models to assist medical practitioners in the future.
References
Cavanagh, P., & Leclerc, Y. G. (1989). Shape from shadows. Journal of Experimental Psychology: Human Perception and Performance, 15(1), 3–27. https://doi.org/10.1037/0096-1523.15.1.3
Cutler, M., Baumel, L. D., Tocco, J., Friebel, W., Thiruvathukal, G. K., & Baker, N. (2025). Beyond the contour: How 3D cues enhance object recognition in humans and neural networks. Journal of Vision. (Pre-publication draft).
Haenssle, H. A., Fink, C., Schneiderbauer, R., Toberer, F., Buhl, T., Blum, A., ... & Thomas, L. (2018). Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology, 29(8), 1836–1842. https://doi.org/10.1093/annonc/mdy166
Tarr, M. J., Kersten, D., & Bülthoff, H. H. (1998). Why the visual recognition system might encode the effects of illumination. Vision Research, 38(15-16), 2259–2275. https://doi.org/10.1016/S0042-6989(97)00428-6
Washington Post. (2025, April 7). AI is coming for skin cancer detection. The Washington Post. https://www.washingtonpost.com/wellness/2025/04/07/ai-is-coming-skin-cancer-detection/