On September 3, 2020, Dr. Camille Ehre, Assistant Professor at the Marsico Lung Institute at the University of North Carolina School of Medicine, published some beautiful and fascinating scanning electron microscope images of the SARS-CoV-2 virus infecting human bronchial epithelial cells. The work was published in the prestigious New England Journal of Medicine, and colorized images can be found on Dr. Ehre's faculty page. It's amazing that this seemingly innocuous little particle (in red in the images) could be responsible for so much sickness and death. Thanks for such great work!

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn, LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/, and follow my company, Mark III Systems, on Twitter @markiiisystems.

#coronavirus #sarscov2 #unc #nejm #marsico #sem #microscopy #lungs #virus #infection #science #medicine #biology #disease
For the last few weeks, my company, Mark III Systems, along with the Vanderbilt University Medical Center Department of Radiology, Dell EMC, and NVIDIA, has been hosting a series of webinars about how AI can augment the great work that radiologists are already doing. AI is never going to replace radiologists, but it does have the potential to make their work a little easier so they can focus on the cases that need their full expertise and experience. The goal of the series is to highlight what is available, and what is coming, to help achieve that. I had the privilege of being the first presenter. My presentation was entitled "The Greatest Thing to Happen to Computing in 10 Years." I discussed the role of GPUs in AI and how they have made things possible that were not possible just 10 years ago. I then described specific use cases for AI in radiology. The link to the presentation is below. Enjoy!

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn, LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/, and follow my company, Mark III Systems, on Twitter @markiiisystems.

#ai #artificialintelligence #machinelearning #radiology #vumc #gpu #dell #emc #nvidia

This post is part 3 of my 4 part series on Keras callbacks. A callback is an object that can perform actions at various stages of training and is specified when the model is trained using model.fit(). In today's post, I give a very brief overview of the TensorBoard callback.

If you are reading this post, chances are you have heard of TensorBoard. It is a great visualization tool that works with TensorFlow to let you see what is going on in the background while your job is running. You can see nice graphs of loss and accuracy, as well as visualizations of your neural network. It has other features as well, but I'll only mention these for now. Take a few minutes to explore it. It is worth the time!

While the TensorBoard callback can take multiple arguments, the one I want to discuss today is log_dir. log_dir lets you specify the directory where the TensorBoard log files are saved. The files in this directory are read by TensorBoard to create the visualizations. This comes in very handy when you are training different models or trying different hyperparameters: simply specify a different directory for each run and you are all set. If the directory does not exist, it will be created when the callback is used. Here is a short snippet of code that creates and uses the callback. Since this particular job is doing linear regression with 100,000 datapoints, I add that to the directory name.
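What follows is a minimal, self-contained sketch of that setup. The synthetic data and tiny model are illustrative stand-ins for my actual job; the part that matters is building a descriptive, timestamped log_dir and passing the callback to model.fit().

```python
import datetime
import numpy as np
import tensorflow as tf

# Synthetic linear-regression data (100,000 datapoints), standing in
# for the dataset from the actual job.
X = np.random.rand(100_000, 1).astype("float32")
y = 3.0 * X + 2.0 + 0.1 * np.random.randn(100_000, 1).astype("float32")

# A log directory that names the job and appends a timestamp, so every
# run writes its logs to its own directory.
log_dir = ("logs/linear_regression_100k_datapoints-"
           + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))

tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)

# A minimal model, just to make the snippet runnable.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")

# The callback is passed to model.fit(); the log directory is created
# automatically if it does not already exist.
model.fit(X, y, epochs=10, batch_size=256, callbacks=[tensorboard_callback])
```

To view the results, point TensorBoard at the parent directory with tensorboard --logdir logs, and each run will show up separately.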
The newly created directory looks something like this:

logs/linear_regression_100k_datapoints-20200911-130022/

Now we have the TensorBoard log files being written to a very specific directory that clearly tracks the specific training job. This is a great way to keep up with what you have done!

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn, LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/, and follow my company, Mark III Systems, on Twitter @markiiisystems.

#artificialintelligence #ai #machinelearning #ml #tensorflow #keras #neuralnetworks #deeplearning #tensorboard #hyperparameters #callbacks #visualization

I signed up for Ironman Coeur d'Alene, Idaho, a couple of days ago. Ironman #7 - June 27, 2021. Please, COVID... let us get back to normal by then. A family vacation to the great Pacific Northwest will be fun!

#ironman #triathlon
This post is part 2 of my 4 part series on Keras callbacks. As I stated in part 1, a callback is an object that can perform actions at various stages of training and is specified when the model is trained using model.fit(). Callbacks are super helpful and save a lot of lines of code. That's part of the beauty of Keras in general - lots of lines of code saved! Some other callbacks include EarlyStopping, discussed in part 1 of this series, TensorBoard, and LearningRateScheduler. I'll discuss TensorBoard and LearningRateScheduler in the last 2 parts. For the full list of callbacks, see the Keras Callbacks API documentation.

In this post, I will discuss ModelCheckpoint. The purpose of the ModelCheckpoint callback is exactly what it sounds like: it saves the Keras model, or just the model weights, at some frequency that you determine. Want the model saved every epoch? No problem. Maybe after a certain number of batches? Again, no problem. Want to keep only the best model based on accuracy or loss? You guessed it - no problem. As with other callbacks, you define how you want ModelCheckpoint to work and pass it to model.fit() when you train the model. I'm sure you can read the full documentation on your own, so I'll just give an example from my own work and hit the high points. Here is some example code, followed by a quick walk through the options.
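This is a minimal sketch of that setup; the toy data, model, and checkpoint filepath are illustrative stand-ins, while the callback options match the behavior described in the training output below.

```python
import os
import numpy as np
import tensorflow as tf

# Toy data and model, just to make the sketch runnable end to end.
X = np.random.rand(1000, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Make sure the checkpoint directory exists before training starts.
os.makedirs("checkpoints", exist_ok=True)

# Save only the weights, and only when the monitored loss improves.
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/weights-epoch{epoch:02d}-loss{loss:.4f}.hdf5",
    monitor="loss",          # the quantity to watch
    save_best_only=True,     # skip epochs where the loss did not improve
    save_weights_only=True,  # save just the weights, not the full model
    verbose=1)               # print a message each time weights are saved

model.fit(X, y, epochs=10, callbacks=[checkpoint_callback])
```

The high points: monitor picks the quantity the callback watches (here the training loss), save_best_only=True means a checkpoint is written only when that quantity improves on its best value so far, save_weights_only=True saves just the weights rather than the whole model, and verbose=1 makes Keras print a line whenever weights are saved. The {epoch} and {loss} placeholders in filepath are filled in at save time, so each checkpoint's filename tells you where it came from.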
Here is the output from some training I did using the ModelCheckpoint defined above. In epochs 3 and 4, the loss decreased, so the model weights were saved. In epochs 5 and 6, the loss did not decrease, so the weights were not saved.

Now that we have our weights saved, we can later go back and load them for inference. I won't cover that (although it is simple) for the sake of brevity. Jason Brownlee (@TeachTheMachine) of Machine Learning Mastery has a very good tutorial on how to do it. He always writes great stuff!

If you have questions and want to connect, you can message me on LinkedIn or Twitter. Also, follow me on Twitter @pacejohn, LinkedIn https://www.linkedin.com/in/john-pace-phd-20b87070/, and follow my company, Mark III Systems, on Twitter @markiiisystems.

#artificialintelligence #ai #machinelearning #ml #tensorflow #keras #neuralnetworks #deeplearning #modelcheckpoint #hyperparameters #callbacks