
Conclusion and Future Work

We adapted nnU-Net, a self-configuring deep-learning model for biomedical image segmentation, into our existing pipeline, replacing both the complicated and cumbersome manual thresholding-based segmentation method and the intermediate image-translation step that nnU-Net makes unnecessary. The new pipeline segments epithelium, lumen, and stroma within prostate glands directly from 3D H&E-stained microscopy images. Although the results are preliminary, they are promising: the quantitative measurements on the test sets, while not perfect, are reasonable given the small amount of training data and the relatively short training schedule. We also found that segmenting directly from H&E-stained images is considerably harder than segmenting from CK8 immunofluorescence-labeled images, since CK8 labeling already provides the high contrast and distinctive features that aid segmentation.

In the near future, we plan to retrain the model on larger 3D image chunks that fully utilize our computing resources (the current training data occupies only about one third of our GPU memory). Larger 3D chunks retain more contextual information, which is essential for biomedical image segmentation. We also plan to train for more epochs; we expect better performance after roughly 1000 epochs of training.
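As a concrete illustration of the planned longer training runs, the sketch below shows one way the epoch count could be raised, assuming the current nnU-Net v2 code base. It is a minimal sketch only: the import path, the num_epochs attribute, and the nnUNetTrainerLongSchedule class name are assumptions made for illustration, not details of our pipeline.

# Sketch only: assumes nnU-Net v2's trainer API. The import path and the
# num_epochs attribute are assumptions based on the public v2 code base.
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer


class nnUNetTrainerLongSchedule(nnUNetTrainer):
    """Hypothetical trainer variant that extends the training schedule."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # The stock trainer defaults to 1000 epochs; raise it for the
        # longer training runs planned as future work.
        self.num_epochs = 2000

A trainer variant like this could then be selected at training time (in nnU-Net v2, via the -tr option of nnUNetv2_train); the larger 3D patch sizes we plan to use would similarly be set in the generated experiment plans before retraining.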