When I was in graduate school, I had a friend named Nirmal. Nirmal was a Computer Science student working on a project that was revolutionary at the time: “image processing” of pictures of melanomas to determine how much they had grown or changed over time. The goal was to let a dermatologist upload pictures to a website and, within a few hours, get highly accurate results about how the melanomas had changed. This was no small feat. To do the analysis, Nirmal had to use a mathematical technique called neural networks, the foundation of what we now call Deep Learning. In concept, neural networks are pretty simple. However, the number of computations required to analyze a high-resolution image is staggering. Fortunately for Nirmal (and for me and my project), our university had something called a “High Performance Computing Cluster”: 97 dual-processor servers (I think each processor had 2 cores), each with a whopping 2 GB of RAM, that could run in parallel. Training the neural network took about 6 days. Once it was trained, he could submit a job with a couple of images; the work would be split across multiple servers at once and the results sent to the dermatologist. In theory, 388 cores could be doing the processing at one time, using 194 GB of RAM. For us in 2006, this was hard to comprehend.
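To get a feel for why the computation is staggering, here is a back-of-the-envelope sketch. The layer sizes are hypothetical, chosen only to illustrate scale: a single fully connected layer from a 1-megapixel image to 1,000 hidden units already costs a billion multiply-adds per forward pass, before you even get to deeper layers or training.

```python
# Back-of-the-envelope cost of one fully connected neural-network layer.
# The sizes below are hypothetical, picked only to illustrate the scale.

pixels = 1000 * 1000   # a 1-megapixel grayscale image as the input
hidden_units = 1000    # one modest hidden layer

# Each hidden unit computes a weighted sum over every input pixel,
# so the layer needs pixels * hidden_units multiply-add operations.
multiply_adds = pixels * hidden_units

print(f"{multiply_adds:,} multiply-adds per image")  # 1,000,000,000
```

And that is one layer, one image, one pass. Multiply by thousands of training images and many passes over the data, and six days on a cluster starts to make sense.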
Fast forward to 2018. Now the same training could be done on just 7 servers. Yes, 7. Imagine what Nirmal could have done with 97 of these servers. There has also been another huge technology innovation that has transformed deep learning: the Graphics Processing Unit, or GPU. The newest GPUs have over 5,000 cores each. That is almost 13 times the core count of the entire HPC cluster, on a single card. Nirmal could have blown our minds with that kind of processing power.
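The comparison works out like this, using only the numbers quoted above (GPU and CPU cores are not directly interchangeable, so treat this as a rough count, not a benchmark):

```python
# Core counts quoted in the text; GPU cores and CPU cores are not
# directly comparable, so this is a rough scale comparison only.
hpc_cores = 97 * 2 * 2   # 97 servers, 2 processors each, ~2 cores per processor
gpu_cores = 5000         # cores on one modern GPU (figure from the text)

print(hpc_cores)                     # 388
print(round(gpu_cores / hpc_cores))  # 13
```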
Today, GPUs, Deep Learning, Artificial Intelligence, and Machine Learning are becoming a standard part of companies’ data-processing divisions. This shift is driving something known as “Digital Transformation.” Let me give you an example. I am currently working on a project analyzing very “messy” text found in images. Think of it as OCR (Optical Character Recognition) for hard-to-read text. To do this, I use thousands of images of hard-to-read text to “train” my Deep Learning network. In one of the training runs, I used a dataset of 7,000 images to build a model that can classify letters it has never seen before. The training takes just 8 minutes on a server with 2 GPUs. Just 12 years ago, this training would have taken Nirmal forever. Honestly, it would not have been feasible. As you can see, the possibilities for what can be done today are staggering.
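My real pipeline uses a deep network and GPUs, but the core idea of training, adjusting weights until labeled examples are classified correctly, can be sketched with a toy perceptron in pure Python. Everything below (the 3x3 “letter” patterns, the learning rate, the epoch count) is invented purely for illustration:

```python
# Toy perceptron that learns to tell an "L" from a "T" in 3x3 images.
# All data and hyperparameters here are invented for illustration only.

L_IMG = [1, 0, 0,
         1, 0, 0,
         1, 1, 1]

T_IMG = [1, 1, 1,
         0, 1, 0,
         0, 1, 0]

training_data = [(L_IMG, 1), (T_IMG, -1)]  # label +1 = "L", -1 = "T"

weights = [0.0] * 9
bias = 0.0
learning_rate = 0.1

def predict(image):
    """Return +1 or -1 based on the weighted sum of the pixel values."""
    activation = bias + sum(w * x for w, x in zip(weights, image))
    return 1 if activation >= 0 else -1

# Classic perceptron rule: nudge the weights toward misclassified examples.
for epoch in range(20):
    for image, label in training_data:
        error = label - predict(image)
        if error != 0:
            bias += learning_rate * error
            for i in range(9):
                weights[i] += learning_rate * error * image[i]

print(predict(L_IMG), predict(T_IMG))  # prints: 1 -1
```

A real OCR network does the same thing at vastly larger scale: millions of weights instead of nine, thousands of images instead of two, and GPUs doing the arithmetic. That scale difference is exactly why the 8-minute training run above would have been out of reach in 2006.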
Deep Learning falls under the larger umbrella of what is known as Artificial Intelligence (AI). At one time, AI conjured up images of robots taking over the world, kind of like in the Terminator movies. Nowadays, it describes the process of making computers “think.” By “think,” I mean make decisions based on statistical inference. Self-driving cars, Google Maps, Facebook drawing boxes around faces in pictures and telling you who it thinks each person is, credit card fraud detected in just milliseconds, your car beeping at you when you drift into another lane without your blinker on: all of these are possible now thanks to AI and massive computing power. Businesses, and even our daily lives, have been “digitally transformed.” None of this would have been possible even 12 years ago.
Where is all this headed? I think the possibilities are endless. What new technology will we have in another 12 years? Will stoplights learn traffic patterns so well that congestion on surface streets is all but eliminated? Will doctors be able to enter lab results or chart notes and see the best course of treatment, backed by statistical evidence from millions of similar cases? Will computer vision algorithms evaluate MRI results and identify cancerous lesions with accuracy equal to or better than a highly trained radiologist’s, but in just seconds? All of these are being worked on today. They were unfathomable when I was in graduate school, but now they have a real chance of becoming reality. “Digital transformation,” coming to a cell phone, hospital, or car near you. Are you ready?