The Most Meaningful and Life-changing Presentation I Have Ever Seen was at the Nvidia GTC 2019 Conference in Washington, DC
Ah yes, the Nvidia GPU Technology Conference (GTC) – my favorite times of the year! Some of the greatest minds in AI come together to show off the work they are doing on Nvidia GPUs or the AI research they are conducting. For an AI fanatic, it’s a brain overload!
In November, I attended the GTC conference in Washington, DC. Though this conference is significantly smaller than its sister conference in San Jose, it was still packed full of fantastic presentations. The main terms I heard over and over were BERT and PyTorch. BERT is taking the natural language processing (NLP) world by storm and PyTorch is quickly gaining momentum and market share among AI/ML/DL frameworks. In this post, I’m going to highlight one of the sessions that stood out to me. In fact, it was possibly the most meaningful and even life-changing presentation I have ever seen at a conference.
Robert Pless (@rbpless), Department Chair and Patrick & Donna Martin Professor of Computer Science at George Washington University, presented a talk entitled “Explainable Deep Learning to Fight Sex Trafficking.” You can watch the video of the presentation here. In the talk, he discussed a project he is conducting with the FBI and that is currently in use by the National Center for Missing and Exploited Children. The project is very simple and elegant, and is having a major impact in helping to stop or prevent crimes against children.
The goal of the project is to identify, down to the specific hotel room, locations where sex trafficking is occurring. His team has developed an app called “TraffickCam.” With the app, you can take 4 pictures of a hotel room. The pictures are uploaded and stored in an image database. These images can then be compared to images that law enforcement officials gather from the Internet or other sources in their trafficking investigations. By identifying distinctive regions in the uploaded images and the images in the database, they can identify, down to the specific room, where a photograph was taken. From this information, law enforcement can target their investigations much more specifically. As of the presentation, there were ~356,000 images from ~32,000 hotels in the database, and that number is growing. However, this data set is only able to identify the correct hotel room in 8-35% of searches. How can that number be increased? More data! I have already added images from 3 hotel rooms since I heard the talk. You can help increase the number, too. Update: As of June 25, 2020, almost 2.9 million images from over 255,000 distinct hotels have been uploaded by users. There are 102,000 iPhone users and 42,000 Android users! What an impact. I have personally uploaded pictures from almost 2 dozen hotels.
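As a rough illustration of how this kind of image-database matching can work, here is a minimal sketch of embedding-based retrieval: each image is mapped to a feature vector, and a query image is matched to the database entry with the highest cosine similarity. This is not TraffickCam’s actual pipeline; the `embed` function and the image names are hypothetical stand-ins for a trained CNN and real hotel photos.

```python
import zlib

import numpy as np

def embed(image_name):
    """Hypothetical stand-in for a CNN that maps an image to a unit-length
    feature vector. Deterministic per name so repeated calls agree."""
    rng = np.random.default_rng(zlib.crc32(image_name.encode()))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def most_similar(query_vec, database):
    """Return the database key whose embedding has the highest cosine
    similarity to the query (vectors are unit length, so a dot product)."""
    best_key, best_sim = None, -1.0
    for key, vec in database.items():
        sim = float(query_vec @ vec)
        if sim > best_sim:
            best_key, best_sim = key, sim
    return best_key, best_sim

# Toy database of two "hotel room" images.
database = {name: embed(name) for name in ["hotel_A_room_101", "hotel_B_room_7"]}
match, score = most_similar(embed("hotel_A_room_101"), database)
```

In a real system the database would hold millions of vectors and use an approximate nearest-neighbor index rather than a linear scan, but the matching idea is the same.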
The image below is taken from Dr. Pless's presentation. On the left, you see the picture of the hotel room TraffickCam is trying to identify. On the right, you see 2 heat maps, or similarity maps. These maps give a visual representation of the part(s) of the image that led the network to decide the images were similar, and possibly the same. Notice that in this picture, it is the headboard. In other pictures in the presentation, you can see it sometimes recognizes framed photos on the wall or other distinguishing features.
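Under simplifying assumptions, a similarity map like those in the presentation can be sketched as per-location cosine similarity between one image's spatial CNN feature map and the other image's embedding; locations that match (e.g. the headboard) light up. The shapes and the toy feature map below are hypothetical, not the network from the talk.

```python
import numpy as np

def similarity_map(feature_map, query_vec):
    """feature_map: (H, W, C) spatial CNN features for one image.
    query_vec: (C,) embedding of the other image.
    Returns an (H, W) heat map of cosine similarities."""
    fm = feature_map / (np.linalg.norm(feature_map, axis=-1, keepdims=True) + 1e-8)
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    return fm @ q

# Toy example: a 4x4 feature map where one location matches the query exactly.
H, W, C = 4, 4, 8
fm = np.zeros((H, W, C))
query = np.ones(C)
fm[2, 3] = query  # pretend this cell covers the headboard
heat = similarity_map(fm, query)
```

Overlaying `heat` (upsampled to image size) on the photo gives the kind of visualization shown on the right of the slide.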
While this topic is quite disturbing, I am incredibly excited to see AI being used in such a helpful and life-changing manner. AI sometimes gets a bad rap, particularly about privacy, but this project is proof that it can change the world for the better. If you haven’t already, please download the app from the App Store or Google Play. It takes less than one minute to take the photos of your hotel room. That one minute can change someone’s life forever. For more information on the project, visit www.traffickcam.com or visit their Facebook page. Be sure to watch the YouTube video for more info on how the project started https://www.youtube.com/watch?v=zhfHOR6yc98 and read the paper at https://arxiv.org/abs/1901.11397.
If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn, LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/, and follow my company, Mark III Systems, on Twitter @markiiisystems
#artificialintelligence #ai #machinelearning #ml #convolutionalneuralnetwork #cnn #computervision #neuralnetworks #nvidia #gtc #traffickcam #aiforgood #gwu #georgewashingtonuniversity
On November 8-10th, I competed in the University of Texas Southwestern Medical School U-Hack Med Hackathon. I attended the event last year and helped multiple groups as a facilitator and mentor, but this year I applied to compete on a team and was accepted! The 48-hour event was a great learning experience and an amazing way to meet new people from varied backgrounds.
I was on Team 9. Our team’s project was entitled “Real-Time Cardiac Assessment of Catheterization-Derived Fick and CMR-Derived Flow” and focused on using machine learning and AI to reduce the time it takes for children to undergo pediatric heart catheterization procedures. For a full description of the project, click here. In the procedure, a wire with a balloon attached to the end is inserted into a large vein near the child’s groin, then guided up through the torso into the heart and surrounding veins and arteries. In the past, this procedure was done using x-rays to allow the physician to see the wire as it moves. However, this makes an already risky procedure even more dangerous. Studies have shown that children who are exposed to 2-4 hours of x-rays (the amount of time a procedure typically takes) are 3-4 times more likely to develop cancer later in life. More recently, cardiologists have been performing radiation-free heart catheterizations using an MRI to view the wire and balloon. In both procedures, the child has to be put under general anesthesia for the entire time. This much anesthesia carries a high risk of causing developmental and neurological problems. Thus, anything that can be done to reduce the exposure to x-rays and the amount of time the child is under anesthesia can significantly reduce potential risks. This was our challenge. Click here for more info on radiation-free heart catheterizations at Children’s Medical Center in Dallas.
Dr. Youssef Arar, a pediatric cardiology fellow at Children’s Medical Center in Dallas, was our team lead. Dr. Daniel Castellanos, a pediatric cardiology fellow and colleague of Dr. Arar, was our co-lead. They routinely perform these procedures, so they are well acquainted with the intricacies, risks, and ways the procedures can possibly be improved. Dr. Arar presented us with two different tasks that could help reduce the time heart cath procedures take.
The first challenge involved automating calculations of blood flow and pressures. Currently, the physician performing the procedure has to tell a technician the values he or she is getting. The technician then enters those values into a spreadsheet and manually calculates the blood flow and pressure. This is a time-consuming and error-prone process. Every moment calculations are being performed manually is another moment the child is exposed to radiation or under anesthesia. When the procedure is done in the MRI room, there is a tremendous amount of noise, so communication is impaired significantly. This makes the process even more error-prone. Our team developed a web-based application to automate the calculations. To use the app, the values are relayed to the technician, or the technician views them on a screen. As they are input, blood flow, pressure, and other values are calculated automatically, giving the physician immediate answers. Dr. Arar said this app could potentially shorten procedures by 40 minutes and give more accurate calculations!
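For context, the core blood-flow calculation in a cath lab is typically based on the Fick principle: flow equals oxygen consumption divided by the arteriovenous oxygen content difference. The sketch below is only a generic illustration of that textbook formula with example values; it is not the team's actual application, and the input numbers are made up.

```python
def o2_content(hgb_g_dl, sat_fraction):
    """Oxygen content in mL O2 per litre of blood.
    13.6 = 1.36 mL O2 per gram of hemoglobin x 10 dL/L (dissolved O2 ignored)."""
    return 13.6 * hgb_g_dl * sat_fraction

def fick_cardiac_output(vo2_ml_min, hgb_g_dl, arterial_sat, venous_sat):
    """Blood flow in L/min by the Fick principle:
    flow = O2 consumption / (arterial O2 content - venous O2 content)."""
    avo2_diff = o2_content(hgb_g_dl, arterial_sat) - o2_content(hgb_g_dl, venous_sat)
    return vo2_ml_min / avo2_diff

# Example values: VO2 250 mL/min, hemoglobin 15 g/dL, sats 98% and 73%.
co = fick_cardiac_output(250, 15.0, 0.98, 0.73)  # about 4.9 L/min
```

Automating even this one formula removes a manual spreadsheet step for each set of saturations the physician calls out.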
The second challenge involved visualizing the balloon in the heart during the MRI procedure. An MRI takes an image of a single plane at any given time. If the balloon is not in that plane, the cardiologist cannot see it, so any movement of the balloon is done blindly. This results in the patient potentially having to be repositioned, lengthening the anesthesia time. Dr. Arar asked us to find a way to track the amount of time the balloon is seen during a procedure. By knowing this information, the cardiologists can evaluate the procedure after it is finished to discuss what could have been done differently to keep the balloon in view longer. By determining what adjustments can be made, future procedures can be more efficient, shorter, and safer. Our team used a process called “thresholding” to segment the balloon on the MRI image and count the number of frames in which it appeared. This allowed us to calculate the percentage of time the balloon was seen.
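The thresholding step can be sketched as follows: each frame is binarized against an intensity threshold, a frame counts as "balloon visible" if enough bright pixels survive, and the visibility percentage is the fraction of such frames. The threshold and minimum-pixel values below are hypothetical illustrations, not the numbers from our actual code.

```python
import numpy as np

def balloon_visible(frame, threshold=200, min_pixels=20):
    """A frame counts as showing the balloon if enough pixels exceed
    the intensity threshold after binarization."""
    mask = frame > threshold
    return int(mask.sum()) >= min_pixels

def visibility_percentage(frames, threshold=200, min_pixels=20):
    """Percentage of frames in which the balloon was segmented."""
    visible = sum(balloon_visible(f, threshold, min_pixels) for f in frames)
    return 100.0 * visible / len(frames)

# Toy 8-bit "MRI" frames: a bright balloon region appears in 3 of 4 frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 120, size=(64, 64)) for _ in range(4)]
for f in frames[:3]:
    f[20:30, 20:30] = 255  # synthetic bright balloon region
pct = visibility_percentage(frames)  # 75.0
```

Real MRI frames would need noise filtering and connected-component analysis before the count is trustworthy, but the counting logic is the same.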
Here is a video of the work our team did. You can see the balloon in red as it is tracked and the time it is visible in green at the top left.
Overall, I had a fantastic time. My team was small, and we had varied backgrounds: a web developer, a computational scientist, a post-doctoral researcher, a PhD student, two MDs, and myself. In the end, our team won the Lyda Hill Best Overall Team award for our work. Great job, everyone!
Ms. Lyda Hill is truly an amazing person. She is a Dallas entrepreneur and philanthropist, one of the few women to make the 2013 Philanthropy list of most generous donors, and on Forbes’ 2014 list of top 15 entrepreneurs who give back to the community. She leads Lyda Hill Philanthropies and is chairman of LH Capital, Inc., a private investment firm. Through her for-profit and not-for-profit investments, Ms. Hill is committed to funding game-changing advances in science and nature, to empowering nonprofit organizations, and to improving the local communities of greatest importance to her: Texas and Colorado.
#artificialintelligence #AI #machinelearning #uhackmed #UTSW #cardiology #pediatrics #lydahill #computervision #thresholding