In this post, I am going to share summaries and personal commentaries of some, but not all, of the talks I heard on Day 2 of the Deep Learning in Healthcare conference hosted by Rework on May 23-24, 2019, in Boston. They are not listed in the order in which the sessions were delivered.
Some of the major overarching topics discussed were natural language processing (NLP) in medical records (as with Day 1) and development of apps used in healthcare.
Panel Discussion – Anthony Chang, MD, Pediatric Cardiologist, CHOC Children’s Hospital, Hye Sun Na, AI Product Manager, GE Healthcare, Vijaya Kolachalama, Assistant Professor, Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine
Dr. Chang discussed how fast medical knowledge doubles, i.e., how quickly the number of new procedures, new drugs, and new research articles doubles. He said it doubles every 2 months! Because of this, physicians are unable to keep up with their own field, much less other related fields. He stated very clearly that physicians need artificial intelligence (AI) to help guide them with precision medicine and treatment plans. He said it would be nice to have a report at the end of the medical record that surfaces discovered links and correlations between symptoms, drug interactions, and other factors. A physician cannot read a 200+ page medical record every time they see a patient. AI can do that and help guide treatment. For more details, see my “Most impactful quotes from the Deep Learning in Healthcare conference in Boston on May 23 and 24, 2019” post.
Dr. Chang also made some other striking comments. He stated “Physicians typing while talking to a patient is ‘criminal.’ A physician should be looking the patient in the eye.” He said a physician cannot fully focus on what the patient is saying while they are typing. He stated that useful AI could help avoid the physician having to type, thereby improving the visit.
Finally, Dr. Chang said that maybe 10-15% of AI projects are not useful or clinically relevant. He stated very strongly that there must be absolute collaboration between clinicians and the group doing the AI project.
Dr. Narayanan’s talk was moving. In it, he presented statistics on the dire state of healthcare in India. I truly had no idea how bad the situation is there (see my blog post on impactful quotes from the conference for more details). He presented 2 mobile apps his company has developed that allow patients to see doctors virtually. This allows people who would not typically have access to healthcare to receive it. The app keeps track of the medical record and helps with diagnosis. It is providing an amazing service to a very underserved country.
“Multilingual NLP for Clinical Text: Impacting Healthcare with Big Data, Globally” – Droice Labs
The presenters showed striking evidence that NLP software is highly biased toward the English language. They presented a slide showing that there are more UMLS concepts for English than for all other languages combined! They have developed software called “Flamingo” that performs NLP on patient records in languages other than English. They are addressing a critical need!
Dr. Yao, a gynecologist who specializes in in-vitro fertilization (IVF), discussed AI software her company has developed. IVF is an expensive process that is taxing both mentally and physically on couples. Predicting the success of IVF has historically been very challenging. Age has always been known to be a strong predictor of success, but in reality it accounts for only about 50% of the predictive ability. Dr. Yao’s company has developed a model that uses many factors to give a probability of IVF being successful. The results are strikingly accurate. They actually allow physicians to offer refunds to patients who have a certain threshold probability of conceiving if they do not conceive after 3 IVF treatments. This allows couples and doctors to have a much better idea of the chances of conception and whether IVF is really the right choice. This talk was a very good example of how AI is solving a real-world healthcare challenge.
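To make the idea concrete, here is a minimal sketch of how a multi-factor success model and a refund-eligibility threshold could fit together. This is purely illustrative: the logistic form, the feature names, the weights, and the threshold are all invented assumptions for this post, not the company’s actual model.

```python
import math

def ivf_success_probability(features, weights, bias):
    """Hypothetical logistic model: combines many patient factors
    (age is only one of them) into a single success probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Illustrative values only -- these numbers are invented for the sketch.
features = [34.0, 2.1, 1.0]    # e.g. age, a hormone level, prior cycles
weights = [-0.08, 0.4, -0.2]
bias = 2.0
p = ivf_success_probability(features, weights, bias)

# A refund program might only be offered above some probability threshold
# (the threshold here is also an invented number).
REFUND_THRESHOLD = 0.5
print(round(p, 2), p >= REFUND_THRESHOLD)
```

The point of the sketch is the structure: many inputs collapse into one interpretable probability, and a business decision (the refund offer) hangs off a simple threshold on that probability.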
Mark Gooding is a character. He was uber active on the conference’s app, quite funny, and a snazzy dresser. Plus, he was a good speaker. He discussed DLC Expert, software his company has developed that helps find the contours of structures in medical images. The thing that stood out most about his talk was a statistic: 70% of clinical institutions have some sort of auto-labeling/contouring software in place, yet 50% of those institutions do not use it. Thus, products have been developed, sold, and implemented but are not being used. This highlights a point brought up in other talks: no matter how good a product is from an AI perspective, if it is not clinically relevant and easy to use, it will not be used and will not benefit anyone. That has to stay front and center.
Using AI to Solve Diabetes & Diabetic Retinopathy – Stephen Odaibo, CEO, Founder, & Chief Software Architect, RETINA-AI Health
Dr. Odaibo, an ophthalmologist who specializes in the retina, discussed the different problems that diabetes can cause in the retina. His slides were fascinating, and he showed a video clip of an actual procedure where he did an injection into a patient’s eye. It was a little unsettling but got the point across that the retinal complications of diabetes are quite serious. His company has developed an app called “Retina-AI” which can accurately diagnose certain diseases of the retina. The model was built on thousands of images that were all expertly labeled by retina specialists. The software is in widespread use, with many users being optometrists. Using the software gives non-experts the equivalent of an expert performing the diagnosis, allowing proper treatment to be given. This talk showed yet another real-world example of how AI is being used clinically to improve patient health.
Clinical Consideration in Implementing AI Solutions for Healthcare – Sujay Kakarmath, Physician-Scientist, Partners Healthcare, Pivot Labs
Dr. Kakarmath’s talk took a different approach from the other talks. He addressed the potential implementation of AI solutions and the possible pitfalls. The project he discussed was aimed at predicting readmission of heart failure patients. Readmission is a serious consideration for healthcare providers. First, it means additional hospitalization and treatment for a patient who has recently undergone inpatient treatment for the same condition. Second, healthcare providers are penalized if patients are readmitted for the same problem within 30 days. Thus, there are significant implications for both patients and providers. In their project, they were able to achieve an AUC of 0.7 for predicting readmission of patients at one hospital. However, when they applied the model to patients from 5 other Boston hospitals, they achieved an AUC of only 0.57. This is not much better than random guessing as to whether or not someone will be readmitted. This underscored the point that generalizability of models across institutions is a significant challenge. He gave 3 takeaways that should be given careful consideration in any project.
We would all be well served to heed these takeaways.
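For readers less familiar with AUC, a quick sketch of why 0.57 is "not much better than random guessing": AUC is the probability that the model scores a randomly chosen readmitted patient above a randomly chosen non-readmitted one, so a pure-noise model sits near 0.5. The data below are simulated for illustration only; they are not from Dr. Kakarmath’s project.

```python
import random

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    random positive case is scored above a random negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Simulated cohort: ~20% of patients readmitted (invented rate).
labels = [int(random.random() < 0.2) for _ in range(2000)]

# A model that is pure noise hovers around AUC = 0.5 ...
noise_scores = [random.random() for _ in labels]
print(round(auc(labels, noise_scores), 2))

# ... while a weakly informative model lands only slightly higher,
# in the neighborhood of the 0.57 reported for the external hospitals.
weak_scores = [0.1 * y + random.random() for y in labels]
print(round(auc(labels, weak_scores), 2))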
Dr. Na discussed a project where they built a classifier for chest x-rays. She said that a common problem in healthcare is incorrect x-rays. A physician may order a frontal chest x-ray, but the x-ray presented to the radiologist is not a frontal chest x-ray, or the x-ray is not useful due to improper positioning. In these cases, the radiologist looks at the x-ray, notices the error, then has to have the x-ray redone. This is non-productive time for the radiologist, requires that the patient have the x-ray retaken (adding radiation exposure), and increases the time the patient is in the hospital or clinic. Together, these add up to additional costs and risks. Dr. Na’s team developed a classifier that can correctly determine (1) if the x-ray is actually a frontal chest x-ray, and (2) if the positioning is correct. In scenario 1, they created a one-class classifier using a variational autoencoder (VAE) and convolutional neural networks (CNN). In scenario 2, they created a binary classifier using a CNN. Their results were highly accurate. They are currently working to implement their models into medical imaging devices. Integration in the device could solve the problems presented above. When an x-ray that is supposed to be a frontal chest x-ray is taken, the radiographer will immediately know if it is correct. The patient will not have to have the x-ray retaken at a later time, and the radiologist will be presented with the correct x-ray the first time.
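The decision rule behind a one-class approach is simple even when the model is not: train only on in-class images (frontal chest x-rays), then flag any image that the model reconstructs poorly. Here is a toy sketch of that reconstruction-error idea, with no neural network at all: the “reconstruction” is just a pixel-wise mean image, and all data, names, and thresholds are invented for illustration. GE’s actual system uses a VAE as the reconstruction model.

```python
import random

def mse(a, b):
    """Mean squared error between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def fit_one_class(train_images):
    """'Train' on in-class images only. Here the reconstruction is simply
    the pixel-wise mean image; a real system would use a VAE decoder."""
    n, d = len(train_images), len(train_images[0])
    mean_img = [sum(img[i] for img in train_images) / n for i in range(d)]
    # Threshold: twice the worst reconstruction error seen in training
    # (the factor of 2 is an arbitrary safety margin for this sketch).
    threshold = 2 * max(mse(img, mean_img) for img in train_images)
    return mean_img, threshold

def is_in_class(image, mean_img, threshold):
    """Accept only if the image reconstructs about as well as training data."""
    return mse(image, mean_img) <= threshold

random.seed(1)
# Toy 'frontal chest x-rays': 64 pixels clustered near 0.5.
frontals = [[0.5 + random.uniform(-0.05, 0.05) for _ in range(64)]
            for _ in range(50)]
mean_img, thr = fit_one_class(frontals)

new_frontal = [0.5 + random.uniform(-0.05, 0.05) for _ in range(64)]
other_view = [random.random() for _ in range(64)]  # very different statistics
print(is_in_class(new_frontal, mean_img, thr))  # in-class image is accepted
print(is_in_class(other_view, mean_img, thr))   # out-of-class image is flagged
```

The key property, which carries over to the VAE version, is that the model never needs examples of wrong x-rays at training time: anything it cannot reconstruct well is rejected.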
If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn and LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/
#nlp #naturallanguageprocessing #artificialintelligence #ai #machinelearning #deeplearning #aiinhealthcare #aiforgood #AUC #medicalrecords #healthcare #IVF #radiology #medicalimaging #ophthalmology #retina #diabetes #cardiology #optometry #VAE #variationalautoencoder #cnn #convolutionalneuralnetwork
(All images of speakers were taken from their LinkedIn pages)