John Pace
Data Scientist, husband, father of 3 great daughters, 5x Ironman triathlon finisher, just a normal guy who spent a lot of time in school.
Let’s explore data science, artificial intelligence, machine learning, and other topics together.

Review of Day 2 of the "Deep Learning in Health Care" Conference in Boston on May 23-24

5/29/2019

In this post, I am going to share summaries and personal commentaries of some, but not all, of the talks I heard on Day 2 of the Deep Learning in Healthcare conference hosted by Rework on May 23-24, 2019, in Boston.  They are not listed in the order in which the sessions were delivered.

Some of the major overarching topics discussed were natural language processing (NLP) in medical records (as with Day 1) and development of apps used in healthcare.

Panel Discussion – Anthony Chang, MD, Pediatric Cardiologist, CHOC Children’s Hospital, Hye Sun Na, AI Product Manager, GE Healthcare, Vijaya Kolachalama, Assistant Professor, Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine
Dr. Anthony Chang (left), Dr. Hye Sun Na (middle), Dr. Vijaya Kolachalama (right)

Dr. Chang discussed how fast medical knowledge doubles, i.e., how often the number of new procedures, new drugs, and new research articles doubles.  He said it doubles every 2 months!  Because of this, physicians are unable to keep up with their own field, much less other related fields.  He stated very clearly that physicians need artificial intelligence (AI) to help guide them with precision medicine and treatment plans.  He said it would be nice to have a report at the end of the medical record that surfaces discovered links and correlations between symptoms, drug interactions, and other things.  A physician cannot read a 200+ page medical record every time they see a patient.  AI can do that and help guide treatment.  For more details, see my “Most impactful quotes from the Deep Learning in Healthcare conference in Boston on May 23 and 24, 2019” post.

​Dr. Chang also made some other striking comments.  He stated “Physicians typing while talking to a patient is ‘criminal.’  A physician should be looking the patient in the eye.”  He said a physician cannot fully focus on what the patient is saying while they are typing.  He stated that useful AI could help avoid the physician having to type, thereby improving the visit.

Finally, Dr. Chang said that maybe 10-15% of AI projects are not useful or clinically relevant.  He stated very strongly that there must be absolute collaboration between clinicians and the group doing the AI project. ​

A Multi-Modal Inferential System for Differential Diagnosis – Ajit Narayanan, CTO, mfine
Dr. Ajit Narayanan, CTO, mfine

Dr. Narayanan’s talk was moving.  In it, he presented statistics on the dire state of healthcare in India.  I truly had no idea how bad the situation is there (see my blog post on impactful quotes from the conference for more details).  He presented 2 mobile apps his company has developed that allow patients to see doctors virtually.  This allows people who would not typically have access to healthcare to receive care.  The app keeps track of the medical record and helps with diagnosis.  The app is providing an amazing service to a very underserved country.


Multilingual NLP for Clinical Text:  Impacting Healthcare with Big Data, Globally – Droice Labs

The presenters showed striking evidence that NLP software is heavily biased toward the English language.  They presented a slide showing that there are more UMLS concepts for the English language than for all other languages combined!  They have developed a software product called “Flamingo” that performs NLP on patient records in languages other than English.  They are helping with a critical need!

Redefining the IVF Experience for Patients & Providers Using AI – Mylene Yao, CEO & Founder, Univfy
Dr. Mylene Yao, CEO & Founder, Univfy

Dr. Yao, a gynecologist who specializes in in-vitro fertilization, discussed AI software her company has developed.  IVF is an expensive process that is taxing both mentally and physically on couples.  Predicting the success of IVF has historically been very challenging.  Age has always been known to be a strong predictor of success, but in reality it only accounts for 50% of predictive ability.  Dr. Yao’s company has developed a model that uses many factors to give a probability of IVF being successful.  The results are strikingly accurate.  They actually allow physicians to offer refunds to patients who have a certain threshold probability of conceiving if they do not conceive after 3 IVF treatments.  This allows couples and doctors to have a much better idea of the chances of conception and whether IVF is really the right choice.  This talk was a very good example of how AI is solving a real-world healthcare challenge.
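To make the idea concrete, here is a minimal sketch (my own illustration, not Univfy’s actual model) of a probabilistic classifier that combines several hypothetical patient factors into a single probability of IVF success:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical features: [age, AMH level, BMI, number of prior IVF cycles]
X = np.array([[34, 2.1, 24.0, 0],
              [41, 0.8, 29.5, 2],
              [29, 3.4, 22.0, 1],
              [38, 1.2, 27.0, 0],
              [32, 2.8, 23.5, 1],
              [43, 0.5, 30.0, 3]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = live birth in this toy historical cohort

clf = LogisticRegression().fit(X, y)

# probability of success for a new (hypothetical) patient
prob = clf.predict_proba([[36, 1.5, 25.0, 1]])[0, 1]
print(f"Estimated probability of IVF success: {prob:.2f}")
```

A model like this, trained on a large cohort with far richer features, is the kind of thing that makes the refund guarantee described above possible: the clinic can quantify its own risk.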


Deploying AI in the Clinic:  Thinking about the Box – Mark Gooding, Mirada
Dr. Mark Gooding, Mirada

Mark Gooding is a character.  He was uber active on the conference’s app, quite funny, and a snazzy dresser.  Plus, he was a good speaker.  He discussed software his company has developed, DLC Expert, that helps with finding contours in medical images.  The thing that most stuck out about his talk was a statistic.  He said that 70% of clinical institutions have some sort of auto-labeling/contouring software in place.  However, 50% of those institutions do not use it.  Thus, products have been developed, sold, and implemented but are not being used.  This highlights a point brought up in other talks.  No matter how good a product is from an AI perspective, if it is not clinically relevant and easy to use, it will not be used and will not benefit anyone.  That has to stay front and center.


Using AI to Solve Diabetes & Diabetic Retinopathy – Stephen Odaibo, CEO, Founder, & Chief Software Architect, RETINA-AI Health
Dr. Stephen Odaibo, CEO, Founder, & Chief Software Architect, RETINA-AI Health

Dr. Odaibo, an ophthalmologist who specializes in the retina, discussed the different problems diabetes can cause in the retina.  His slides were fascinating, and he showed a video clip of an actual procedure in which he gave an injection into a patient’s eye.  It was a little unsettling but got the point across that the complications to the retina due to diabetes are quite serious.  His company has developed an app called “Retina-AI” which can accurately diagnose certain diseases of the retina.  The model was built on thousands of images that were all expertly labeled by retina specialists.  The software is in widespread use, with many users being optometrists.  The software gives non-experts the equivalent of an expert performing the diagnosis, allowing proper treatment to be given.  This talk showed yet another real-world example of how AI is being used clinically to improve patient health.


Clinical Consideration in Implementing AI Solutions for Healthcare – Sujay Kakarmath, Physician-Scientist, Partners Healthcare, Pivot Labs
Dr. Sujay Kakarmath, Physician-Scientist

Dr. Kakarmath’s talk took a different approach than the other talks did.  He addressed the potential implementation of AI solutions and the possible pitfalls.  The project he discussed was aimed at predicting readmission of heart failure patients.  Readmission is a serious consideration for healthcare providers. First, it is additional hospitalization and treatment for a patient who has recently undergone inpatient treatment for the same condition.  Second, healthcare providers are penalized if patients are readmitted for the same problem within 30 days.  Thus, there are significant implications for both patients and providers.  In their project, they were able to achieve an AUC of 0.7 for predicting readmission of patients at one hospital.  However, when they applied the model to patients from 5 other Boston hospitals, they only achieved an AUC of 0.57.  This is not much better than random guessing as to whether or not someone will be readmitted.  This underscored the point that generalizability of models across institutions is a significant challenge.  He gave 3 takeaways that should be given careful consideration in any project. 


  • First, as had been said by other speakers such as Dr. Jordan Smoller on Day 1, technical performance of an AI solution does not correspond to utility.  It must be evaluated in the context of its intended use.
  • Second, performance of an AI solution is not set in stone.  Continuous assessment in different settings as well as at different times within the same setting may have to become the norm in a high-stakes environment such as healthcare.
  • Third, impact analysis of an AI solution should include critical evaluation of the cost of acting on false positive and false negative errors on clinical as well as economic outcomes.

​We would all be well served to heed these takeaways.
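To make the generalizability point concrete, here is a minimal sketch (entirely synthetic data, not the Partners model) of the internal-versus-external check Dr. Kakarmath described: score a held-out set from the development hospital and a set from other hospitals with the same model, then compare the AUCs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# hypothetical readmission labels and model-predicted probabilities
y_internal = rng.integers(0, 2, 500)                    # development hospital
p_internal = 0.2 * y_internal + 0.8 * rng.random(500)   # scores carry real signal
y_external = rng.integers(0, 2, 500)                    # five other hospitals
p_external = rng.random(500)                            # scores carry almost no signal

print(f"internal AUC = {roc_auc_score(y_internal, p_internal):.2f}")
print(f"external AUC = {roc_auc_score(y_external, p_external):.2f}")
```

A large gap between those two numbers is exactly the signal that a model should not be deployed beyond its development site without re-validation or re-training.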

AI Assisted Radiology Image Quality Assessment – Hye Sun Na, AI Product Manager, GE Healthcare

Dr. Na discussed a project where they built a classifier for chest x-rays.  She said that a common problem in healthcare is incorrect x-rays.  A physician may order a frontal chest x-ray, but the x-ray presented to the radiologist is not a frontal chest x-ray, or the x-ray is not useful due to improper positioning.  In these cases, the radiologist looks at the x-ray, notices the error, then has to have the x-ray redone.  This is non-productive time for the radiologist, requires that the patient have the x-ray retaken, thereby adding radiation exposure, and increases the time the patient is in the hospital or clinic.  Together, these add up to additional costs and risks.  Dr. Na’s team developed classifiers that can correctly determine (1) if the x-ray is actually a frontal chest x-ray, and (2) if the positioning is correct.  For scenario 1, they created a one-class classifier using a variational autoencoder (VAE) and convolutional neural networks (CNN).  For scenario 2, they created a binary classifier using a CNN.  Their results were highly accurate.  They are currently working to implement their models into medical imaging devices.  Integration in the device could solve the problems presented above.  When an x-ray that is supposed to be a frontal chest x-ray is taken, the radiographer will immediately know if it is correct.  The patient will not have to have the x-ray taken again at a later time, and the radiologist will be presented with the correct x-ray the first time.
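As a rough illustration of the second scenario only (a sketch of my own, not GE’s model), a small binary CNN for “correctly positioned vs. not” might look like this; the one-class VAE approach for scenario 1 would instead flag images whose reconstruction error is unusually high.

```python
import tensorflow as tf
from tensorflow.keras import layers

# binary classifier: P(frontal chest x-ray is correctly positioned)
model = tf.keras.Sequential([
    layers.Input(shape=(256, 256, 1)),          # grayscale x-ray, resized
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```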

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn and LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/

#nlp #naturallanguageprocessing #artificialintelligence #ai #machinelearning #deeplearning #aiinhealthcare #aiforgood #AUC #medicalrecords #healthcare #IVF #radiology #medicalimaging #opthalmology #retina #diabetes #cardiology #optometry #VAE #variationalautoencoder #cnn #convolutionalneuralnetwork

(All images of speakers were taken from their LinkedIn pages)

Review of Day 1 of the "Deep Learning in Health Care" conference in Boston on May 23-24

5/29/2019

In this post, I am going to share summaries and personal commentaries of some, but not all, of the talks I heard on Day 1 of the Deep Learning in Healthcare conference hosted by Rework on May 23-24, 2019, in Boston.  They are not listed in the order in which the sessions were delivered.

Some of the major overarching topics discussed were natural language processing (NLP) in medical records, development of algorithms, and recurrent neural networks.
Dr. Akane Sano, Rice University (left) and Dr. Jordan Smoller, Psychiatrist, Harvard Medical School and Massachusetts General Hospital (right)

Panel Discussion:  The Impacts of Machine Learning in Mental Health Care – Akane Sano, Rice University, Jordan Smoller, Psychiatrist, Harvard Medical School and Massachusetts General Hospital


This may have been the highlight of the day.  I’ll just mention some of the major points Dr. Smoller brought up and discussed.

  • First, psychiatric illness is the leading cause of disability worldwide.  I had no idea!  This is obviously an area that needs tremendous research!!! 
  • Second, there is no validated means to predict suicide in youths even though many see a clinician within 1-2 months of committing suicide. 
  • Third, it is critical for anyone working on AI projects in healthcare to work in conjunction with physicians.  A product that is fantastic from an AI perspective (great algorithm or model) can be useless in the clinical setting for which it was developed if domain knowledge from physicians was not integrated. 
  • Fourth, he said they have developed an app that works in their institution and has been shown to work in other institutions.  This is a major accomplishment because models are often not generalizable between institutions.  However, he said he would like to see it go through actual clinical trials before it is put into use.  His justification is that if it is going to be used by physicians, it should undergo the same testing as any treatment method would. 
  • Fifth, there is currently no way to predict which medications to give a patient.  This is important because new medications are constantly being developed and it is not possible for a physician to keep up with all of them, the interactions they may have with other drugs, or the way they may affect other symptoms the patient is experiencing.  The same thing was said by Dr. Anthony Chang during the panel discussion on the 2nd day of the conference.

I asked 2 questions.  The first was whether their model found any predictors of suicide that surprised him, things they would never have thought of.  He said there were two.  The first was “alveolitis of the jaw,” an inflammation of the jaw.  The second was certain types of orthopedic encounters.  He said these encounters often occurred as a result of self-harm by the patient but were reported only as orthopedic encounters, not as self-harm, and thus were not classified as psychiatric care.

My second question was whether they were able to use socioeconomic data in conjunction with the medical record.  He said in some cases they were, but that it would be tremendously helpful if it could be used more widely.

Learning How the Genome Folds in 3D – Neva Durand, Aiden Lab, Baylor College of Medicine
Dr. Neva Durand, Aiden Lab, Baylor College of Medicine

I was particularly excited about this talk because the Aiden Lab is world-renowned in the field of biology.  This lab invented the Hi-C sequencing method, among other things.  Dr. Durand discussed how they were trying to determine the location of enhancers and promoters along chromosomes.  These can be difficult to find since they may be located great distances from each other yet work in concert.  She described this as “spooky action at a distance.”  I like that term.  When enhancers and promoters interact, they bind together and form a looped DNA structure.  By using deep learning to create “contact maps” of regions along the chromosomes, they were able to locate potential enhancers and promoters.  This is something I want to investigate further.
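For readers unfamiliar with contact maps: a Hi-C experiment yields pairs of genomic positions found in physical contact, and binning those pairs produces a matrix whose off-diagonal hotspots are candidate long-range loops (such as enhancer-promoter interactions).  A minimal sketch of that binning step, with made-up data and an illustrative bin size (not the Aiden Lab pipeline), might look like this:

```python
import numpy as np

BIN_SIZE = 10_000                         # 10 kb bins (illustrative choice)
CHROM_LEN = 1_000_000                     # toy chromosome length
n_bins = CHROM_LEN // BIN_SIZE

# hypothetical Hi-C contacts: each row holds the positions of the two ends of a read pair
rng = np.random.default_rng(0)
contacts = rng.integers(0, CHROM_LEN, size=(5000, 2))

contact_map = np.zeros((n_bins, n_bins), dtype=int)
i, j = contacts[:, 0] // BIN_SIZE, contacts[:, 1] // BIN_SIZE
np.add.at(contact_map, (i, j), 1)
np.add.at(contact_map, (j, i), 1)         # keep the matrix symmetric

print(contact_map.shape)                  # (100, 100) binned contact map
```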


Applying AI in Early Clinical Development of New Drugs – James Cai, Head of Data Science, Roche Innovation Center New York
James Cai, Head of Data Science, Roche Innovation Center New York

This was the first talk that made me think of new use cases that could be developed for our customers.  Dr. Cai discussed a phone app they developed that can identify standing, sitting, walking, and other movements.  This is being used to give quantitative measurements of the activity of Parkinson’s Disease patients to see how their disease is progressing over time.  He said this is critical because self-reporting of symptoms by patients is often inconsistent, vague, and lacking in detail.  Next, he discussed how they are using smart watches to measure movement in schizophrenia patients.  His hypothesis is that you can measure patient motivation and the levels and duration of depressive events by analyzing the amount of time patients are lying down or sleeping versus standing or moving.  This can help physicians monitor the effectiveness of the medications the patient is taking without having to rely solely on self-reported data.
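As a rough sketch of how sensor data like this is often handled (my own illustration, not Roche’s pipeline): split the accelerometer stream into short windows, compute simple summary features per window, and train a classifier over activity labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (n_samples, 3) array of accelerometer readings (x, y, z)."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [magnitude.mean(), magnitude.std()]])

# hypothetical labeled windows: 0 = sitting, 1 = standing, 2 = walking
rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 50, 3))          # 300 windows of 50 samples each
labels = rng.integers(0, 3, size=300)

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```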


The “Why” Behind Barriers to Better Health Behaviors – Ella Fejer from the UK Science & Innovation Network
Ella Fejer from the UK Science & Innovation Network

Dr. Fejer mentioned that our AI healthcare projects should “Be Inspirational.”  I found this statement to be challenging.  So often we seem focused on demonstrating that something can be done.  What if we focused not just on proving that it can be done, but on doing it in a way that is “inspirational” to clients and patients?  What does “inspirational” mean?  I think, in this context, it means that it should get people excited about what can be accomplished and how it can impact their lives for the better.


Application of a Deep Learning Model for Clinical Optimization and Population Health Management – Janos Perge, Principal Data Scientist, Clinical Products Team, CVS
Janos Perge, Principal Data Scientist, Clinical Products Team, CVS

Dr. Perge said that CVS owns Aetna, one of the largest insurance companies in the US.  This gives them access to a wealth of medical data and records.  I did not know CVS owned Aetna.  His team has worked on 2 real-world applications.  The first is a low back surgery model to predict the risk of future back surgery over three time intervals.  The second is a set of kidney failure models to predict the future risk of a member having chronic kidney failure and needing dialysis over different time intervals.  Both projects have the potential to greatly lower costs since they can help physicians develop preventative treatment plans, thus avoiding major medical issues in the future.  They developed hybrid models consisting of LSTMs that utilized sequential ICD-9 codes and a logistic layer to incorporate static features.  One drawback of neural networks is the inability to tell why or how a model gives the results it does.  This lack of explainability makes deep learning appear to be a “black box.”  To get around this, they added an attention layer near the top of their model that signaled relevant events.  Dr. Perge did not go into much detail on this, and I wish he had.  They also added a logistic layer at the top of the model to incorporate static features from domain knowledge.  They found that by using deep learning with transfer learning, which included both sequential features and all static features, they were able to achieve the highest AUC.
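A minimal sketch of that general architecture (my own reconstruction under stated assumptions, not CVS’s actual model): an LSTM over a padded sequence of ICD-9 code indices, a simple additive attention layer that weights the time steps, and a final logistic layer that also takes static features.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, N_STATIC = 100, 5000, 10   # hypothetical sizes

codes_in = layers.Input(shape=(MAX_LEN,), name="icd9_codes")          # padded sequence of code indices
static_in = layers.Input(shape=(N_STATIC,), name="static_features")   # e.g. age, sex, lab values

x = layers.Embedding(input_dim=VOCAB, output_dim=64)(codes_in)
h = layers.LSTM(128, return_sequences=True)(x)

# simple additive attention: score each time step, softmax, weighted sum
scores = layers.Dense(1)(h)                                   # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

# logistic layer on top, combining the sequence summary with static features
merged = layers.Concatenate()([context, static_in])
output = layers.Dense(1, activation="sigmoid")(merged)

model = Model([codes_in, static_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```

The attention weights are also what give you a (rough) answer to the black-box problem: they show which events in the sequence the model leaned on for a given prediction.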


How Chick-fil-A uses AI to Spot Food Safety Trends in Social Media – Davis Addy, Sr. Principal IT Leader, Food Safety & Product Quality
Davis Addy, Sr. Principal IT Leader, Food Safety & Product Quality

This was the only talk I attended that was not in the Healthcare track.  I was interested in the talk because I wanted to see a real-world application of social media analysis.  Their goal is simple:  Use social media, such as Twitter and Yelp, to help spot potential food safety related issues.  Their entire pipeline is run on AWS.  They purchase social media data from a 3rd party company.  This is analyzed using Amazon’s Comprehend sentiment analysis service.  There are a couple of intermediate Python scripts they have written in-house.  The results appear on a dashboard that both the restaurant and the corporate office can see.  If a comment indicates a problem, the restaurant can respond as necessary.  He discussed in detail the challenges of sentiment analysis and the steps they have had to take to make it more accurate.  For example, take the word “sick.”  One review might say “I ate a chicken sandwich and it made me sick.”  This should be a negative sentiment.  However, a teenager may write “Chick-fil-A makes a sick chicken sandwich.”  In this case, the reviewer is using “sick” in a positive manner and this should get a positive sentiment.  This was just one example he gave.  There was a significant list of challenges they had to face.  He said all of their code is being put on GitHub (github.com/chick-fil-a).  I thought that was a nice touch.
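Calling Comprehend for sentiment is a single API call per document.  A minimal sketch using boto3 (my own example, not Chick-fil-A’s code; AWS credentials and region are assumed to be configured):

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

reviews = [
    "I ate a chicken sandwich and it made me sick.",      # should read as NEGATIVE
    "Chick-fil-A makes a sick chicken sandwich.",         # slang; ideally POSITIVE
]

for text in reviews:
    resp = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    print(resp["Sentiment"], resp["SentimentScore"])
```

Slang like the second example is exactly where an off-the-shelf sentiment model needs the kind of in-house post-processing the talk described.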


Natural Language Processing for Healthcare – Amir Tahmasebi, Director of Machine Learning and AI, Codametrix
Amir Tahmasebi, Director of Machine Learning and AI, Codametrix

This was the first of many talks on natural language processing (NLP) in healthcare.  Dr. Tahmasebi did a very good job of explaining some of the major challenges of using NLP with health records.  These include:  data size, data source/format, data structure, longitudinal data, clinical text and language complexity, language ambiguity, and experience-driven domain knowledge and practice.  Each of these is a hurdle that must be overcome in order to perform effective NLP.  When discussing their deep learning strategy, he mentioned 2 things I had never heard of before.  The first was ELMo, or Embeddings from Language Models, and the second was BERT, or Bidirectional Encoder Representations from Transformers.  I’m not an NLP expert by any means, so I’m going to spend some time looking into them.  Thankfully, he cited the papers that initially described these.
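For anyone else looking into them: both ELMo and BERT produce contextual word embeddings, meaning the vector for a token depends on the sentence around it.  A minimal sketch of extracting BERT embeddings for a clinical-style sentence with the Hugging Face transformers library (a generic example, not Codametrix’s pipeline; a clinically pre-trained checkpoint would normally be preferred over bert-base-uncased):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

note = "Patient denies chest pain but reports shortness of breath on exertion."
inputs = tokenizer(note, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

token_vectors = outputs.last_hidden_state      # (1, num_tokens, 768) contextual embeddings
print(token_vectors.shape)
```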


Deep Learning for the Assessment of Knee Pain – Vijaya Kolachalama, Assistant Professor, Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine
Vijaya Kolachalama, Assistant Professor, Department of Medicine, Boston University School of Medicine

Dr. Kolachalama’s talk was particularly fascinating because he attempted to demonstrate how the location and severity of pain, which is subjective, can actually be pinpointed using deep learning.  In his work, he looked at knee pain.  His deep learning model used a convolutional Siamese network, which applies the same set of weights in tandem to two different inputs to compute comparable output vectors, and it outputs a heatmap showing the predicted area of pain.  During the Q&A, Dr. Tahmasebi (see summary above) asked if the network wasn’t just finding areas of abnormality in the knee.  Dr. Kolachalama said this was the case, but that regions of abnormality are typically the source of pain, so the model helped discover both the location of pain and any abnormalities.  I asked if they had tried to apply the model to referred pain, such as when reported pain in the knee is actually caused by a pinched sciatic nerve in the hip.  In this case, an area of abnormality would not be found in the knee.  He said this is something they are looking into.  I think there is potential there because finding no abnormality in the knee could direct the physician to look for other sources of the pain.
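The defining feature of a Siamese network is the shared encoder.  A minimal PyTorch sketch of that weight-sharing idea (my own illustration, not Dr. Kolachalama’s model, which also produces heatmaps):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """One convolutional encoder whose weights are reused for both inputs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 56 * 56, 128)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()          # same weights applied to both images

    def forward(self, img_a, img_b):
        return self.encoder(img_a), self.encoder(img_b)

net = SiameseNet()
img_a = torch.randn(4, 1, 224, 224)             # e.g. two radiographs to compare
img_b = torch.randn(4, 1, 224, 224)
emb_a, emb_b = net(img_a, img_b)
print(F.pairwise_distance(emb_a, emb_b))        # similarity/difference score per pair
```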

Distributed Tensorflow – Scaling Your Model Training – Neil Tenenholtz, Director of Machine Learning at the MGH and BWH Center for Clinical Data Science
Neil Tenenholtz, Director of Machine Learning at the MGH and BWH Center for Clinical Data Science

This was a nice technical talk on how to employ multiple GPUs, both within servers and across servers, for both data-parallel and model-parallel training.  He mentioned Horovod, which allows GPUs from multiple servers to be used via MPI.  I have recently spent some time studying Horovod, and this talk helped augment my knowledge.
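For data-parallel training with Horovod and Keras, the usual recipe is: initialize Horovod, pin each process to one GPU, wrap the optimizer, and broadcast the initial weights.  A minimal sketch with toy data (a generic example of the standard Horovod pattern, not code from the talk), launched with something like "horovodrun -np 4 python train.py":

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                            # one process per GPU
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# toy data standing in for a real dataset
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))  # all-reduce gradients
model.compile(optimizer=opt, loss="binary_crossentropy")

model.fit(x, y, batch_size=64, epochs=2,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],     # sync initial weights
          verbose=1 if hvd.rank() == 0 else 0)
```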

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn and LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/

#nlp #naturallanguageprocessing #artificialintelligence #ai #machinelearning #deeplearning #aiinhealthcare #aiforgood #psychiatry #harvardmedicalschool #massgeneral #mgh #aidenlab #bcm #baylorcollegeofmedicine #parkinsonsdisease #genomics #lstm #AUC #chickfila #aws #amazon #elmo #bostonuniversity #horovod #bert

(All images of speakers were taken from their LinkedIn pages)

Most impactful quotes from the Deep Learning in Healthcare conference in Boston on May 23 and 24, 2019

5/25/2019


I attended the “Deep Learning in Healthcare” conference put on by Rework in Boston on Thursday, May 23 and Friday, May 24.  I can say it was one of the best conferences I have ever attended.  The topics were relevant, challenging, innovative, and thought provoking. The sessions were only 15-20 minutes long.  This allowed for more speakers to share and for the presentations to be succinct.  Presenters stayed around the conference after their talks to answer questions and discuss further.  The moderator kept the conference on time so all the speakers had their full amount of time.  Overall, it was well run and exciting. 

There were a couple of quotes from speakers that stood out to me and summed up the current state of Deep Learning and AI in healthcare.  In this post, I am going to discuss these quotes.  I will give details of the other talks in a follow-up post.
Dr. Ajit Narayanan, CTO of mfine

Quote 1:  Ajit Narayanan, CTO of mfine, a company that developed a mobile app to help physicians and patients in India, said, “There is approximately 1 physician per 5,500 people in India.”  That’s the equivalent of having only 479 physicians for the entire population of Dallas County.  In Dallas County alone, there were 5,924 physicians in 2015.  The disparity is staggering.  This statistic stood out to me because it demonstrated the amazing need that AI can potentially help with, not only in the US, but around the world.  Physicians can be more accessible and more productive, thereby improving the health of people who may not otherwise receive care.  Using AI to improve the world should be a significant goal of any project.

Anthony Chang, MD, MBA, MPH, MS, Pediatric Cardiologist, Founder of Artificial Intelligence in Medicine (AIMed)

Quote 2:  In a panel discussion titled “The Future of Healthcare – What Can We Expect?”, Dr. Anthony Chang, a pediatric cardiologist, stated, “Computer vision in healthcare is easy overall and can be considered the low hanging fruit.  The future needs to be in helping physicians individualize health plans and deliver precision medicine.  It should use cognitive architectures and intelligence to assist physicians with decision making.”  He continued and made the point that it is impossible for a physician to read a 200+ page patient medical record in the few minutes the physician has with the patient.  He wants to see AI that can analyze the medical record and give direction on the prescribed treatment.  Is there a certain condition the patient had in the past that is correlated with a condition the patient is likely to develop or that exacerbates the condition they are seeing the physician for?  AI can take all the information in the patient record and analyze it quickly, in the context of the entire patient history as well as in the context of other patients with similar conditions.  The physician could be presented with this information in a succinct manner and could use it to make more accurate diagnoses and more targeted treatment plans.

I found this statement fascinating.  Many, many talks at conferences are on using computer vision in healthcare, particularly in radiology and pathology.  In those specialties, computer vision is critical.  However, in many specialties, such as cardiology, it is not, and in psychiatry it is irrelevant.  I have heard other physicians at conferences say similar things.  When I asked what problem he would most like to see solved by AI, Dr. Daniel Rubin, a radiologist at Stanford University, said he would like a strong solution that predicts if patients will cancel their appointments.  Cancelled appointments result in unproductive time for physicians and keep other patients from being seen because the time slot is designated as filled.  Other physicians I have questioned have mentioned finding interactions between drugs and specific conditions that are not currently known but that can be found by analysis of large numbers of patients.  I think in some ways we may be pursuing the wrong avenues of AI research.  We have to collaborate with physicians and solve problems they face in their practices daily.  We, as non-physicians, can make assumptions about what will be helpful, but it may not be.  AI in healthcare projects should be laser focused and done with strong collaboration with physicians, or the project may end up being irrelevant or not useful in the end.


If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn and LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/

​#artificialintelligence #ai #machinelearning #ml #aiforgood #rework #deeplearning #medicine #healthcare #aiinhealthcare #cardiology #mfine #aimed #stanford
