- James B. Duke Distinguished Professor of Electrical and Computer Engineering
- Professor of Electrical and Computer Engineering
- Professor of Mathematics (Secondary)
- Professor of Computer Science (Secondary)
- Faculty Network Member of the Duke Institute for Brain Sciences
By appointment. Contact via e-mail.
Guillermo Sapiro received his B.Sc. (summa cum laude), M.Sc., and Ph.D. from the Department of Electrical Engineering at the Technion, Israel Institute of Technology, in 1989, 1991, and 1993, respectively. After post-doctoral research at MIT, Dr. Sapiro became a Member of Technical Staff at HP Labs in Palo Alto, California. He then joined the Department of Electrical and Computer Engineering at the University of Minnesota, where he held the position of Distinguished McKnight University Professor and Vincentine Hermes-Luh Chair in Electrical and Computer Engineering. He is currently the Edmund T. Pratt, Jr. School Professor at Duke University.
G. Sapiro works on theory and applications in computer vision, computer graphics, medical imaging, image analysis, and machine learning. He has authored or co-authored over 300 papers in these areas and wrote a book published by Cambridge University Press in January 2001.
G. Sapiro was awarded the Gutwirth Scholarship for Special Excellence in Graduate Studies in 1991, the Ollendorff Fellowship for Excellence in Vision and Image Understanding Work in 1992, the Rothschild Fellowship for Post-Doctoral Studies in 1993, the Office of Naval Research Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1998, the National Science Foundation CAREER Award in 1999, and the National Security Science and Engineering Faculty Fellowship in 2010. He received the Test of Time Award at ICCV 2011. He was elected to the American Academy of Arts and Sciences in 2018.
G. Sapiro is a Fellow of IEEE and SIAM.
G. Sapiro was the founding Editor-in-Chief of the SIAM Journal on Imaging Sciences.
Point-of-care cellular and molecular pathology of breast tumors on a cell phone awarded by National Institutes of Health (Co-Investigator). 2020 to 2025
Learning and Explaining Information Dynamics from Overhead Imagery awarded by (Principal Investigator). 2019 to 2024
Novel Approaches to Infant Screening for ASD in Pediatric Primary Care awarded by National Institutes of Health (PD/PI). 2019 to 2024
Novel see and treat strategies for cervical cancer prevention in low-resource settings awarded by National Institutes of Health (Co-Principal Investigator). 2019 to 2024
Scalable Computational Platform For Active Closed-Loop Behavioral Coding in Autism Spectrum Disorder awarded by National Institutes of Health (Principal Investigator). 2019 to 2023
HDR TRIPODS: Innovations in Data Science: Integrating Stochastic Modeling, Data Representation, and Algorithms awarded by National Science Foundation (Senior Investigator). 2019 to 2022
Uncovering Population-Level Cellular Relationships to Behavior via Mesoscale Networks awarded by National Institutes of Health (Co-Investigator). 2019 to 2022
SCH: INT: Computational Tools for Avoidant/Restrictive Food Intake Disorder awarded by National Science Foundation (Principal Investigator). 2019 to 2022
Digital Behavioral Outcome Measures for Autism awarded by (Principal Investigator). 2019 to 2022
Tailoring treatment targets for early autism intervention in Africa awarded by National Institutes of Health (Co-Investigator). 2019 to 2021
Pisharady, Pramod Kumar, et al. “Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI.” Vol. 9349, 2015, pp. 117–24. EPMC, doi:10.1007/978-3-319-24553-9_15. Full Text
Sprechmann, P., et al. “Supervised non-negative matrix factorization for audio source separation.” Applied and Numerical Harmonic Analysis, 2015, pp. 407–20. Scopus, doi:10.1007/978-3-319-20188-7_16. Full Text
Tenenbaum, Elena J., et al. “A Six-Minute Measure of Vocalizations in Toddlers with Autism Spectrum Disorder.” Autism Research: Official Journal of the International Society for Autism Research, Mar. 2020. EPMC, doi:10.1002/aur.2293. Full Text Open Access Copy
Dawson, Geraldine, et al. “Author Correction: Atypical postural control can be detected via computer vision analysis in toddlers with autism spectrum disorder.” Sci Rep, vol. 10, no. 1, Jan. 2020, p. 616. PubMed, doi:10.1038/s41598-020-57570-1. Full Text
Simhal, Anish K., et al. “Multifaceted Changes in Synaptic Composition and Astrocytic Involvement in a Mouse Model of Fragile X Syndrome.” Scientific Reports, vol. 9, no. 1, Sept. 2019, p. 13855. EPMC, doi:10.1038/s41598-019-50240-x. Full Text
Asiedu, Mercy Nyamewaa, et al. “Development of Algorithms for Automated Detection of Cervical Pre-Cancers With a Low-Cost, Point-of-Care, Pocket Colposcope.” IEEE Trans Biomed Eng, vol. 66, no. 8, Aug. 2019, pp. 2306–18. PubMed, doi:10.1109/TBME.2018.2887208. Full Text
Dawson, Geraldine, and Guillermo Sapiro. “Potential for Digital Behavioral Measurement Tools to Transform the Detection and Diagnosis of Autism Spectrum Disorder.” JAMA Pediatr, vol. 173, no. 4, Apr. 2019, pp. 305–06. PubMed, doi:10.1001/jamapediatrics.2018.5269. Full Text
Campbell, Kathleen, et al. “Computer vision analysis captures atypical attention in toddlers with autism.” Autism, vol. 23, no. 3, Apr. 2019, pp. 619–28. PubMed, doi:10.1177/1362361318766247. Full Text
Sapiro, G., et al. “Computer vision and behavioral phenotyping: an autism case study.” Current Opinion in Biomedical Engineering, vol. 9, Mar. 2019, pp. 14–20. Scopus, doi:10.1016/j.cobme.2018.12.002. Full Text
Shamir, Reuben R., et al. “Microelectrode Recordings Validate the Clinical Visualization of Subthalamic-Nucleus Based on 7T Magnetic Resonance Imaging and Machine Learning for Deep Brain Stimulation Surgery.” Neurosurgery, vol. 84, no. 3, Mar. 2019, pp. 749–57. EPMC, doi:10.1093/neuros/nyy212. Full Text
Kim, Jinyoung, et al. “Automatic localization of the subthalamic nucleus on patient-specific clinical MRI by incorporating 7 T MRI and machine learning: Application in deep brain stimulation.” Human Brain Mapping, vol. 40, no. 2, Feb. 2019, pp. 679–98. EPMC, doi:10.1002/hbm.24404. Full Text
Cheng, X., et al. “RoTDCF: Decomposition of convolutional filters for rotation-equivariant deep networks.” 7th International Conference on Learning Representations, ICLR 2019, Jan. 2019. Open Access Copy
Martinez, N., et al. “Non-Contact Photoplethysmogram and Instantaneous Heart Rate Estimation from Infrared Face Video.” Proceedings, IEEE International Conference on Image Processing, ICIP, vol. 2019-September, 2019, pp. 2020–24. Scopus, doi:10.1109/ICIP.2019.8803109. Full Text
Bertran, M., et al. “Adversarially learned representations for information obfuscation and inference.” 36th International Conference on Machine Learning, ICML 2019, vol. 2019-June, 2019, pp. 960–74.
Ahn, H. K., et al. “Classifying Pump-Probe Images of Melanocytic Lesions Using the Weyl Transform.” ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, vol. 2018-April, 2018, pp. 4209–13. Scopus, doi:10.1109/ICASSP.2018.8461298. Full Text
Giryes, R., et al. “The Learned Inexact Project Gradient Descent Algorithm.” ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, vol. 2018-April, 2018, pp. 6767–71. Scopus, doi:10.1109/ICASSP.2018.8462136. Full Text
Asiedu, M. N., et al. “Image processing and machine learning techniques to automate diagnosis of Lugol's iodine cervigrams for a low-cost point-of-care digital colposcope.” Progress in Biomedical Optics and Imaging, Proceedings of SPIE, vol. 10485, 2018. Scopus, doi:10.1117/12.2282792. Full Text
Su, S., et al. “Deep video deblurring for hand-held cameras.” Proceedings, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, 2017, pp. 237–46. Scopus, doi:10.1109/CVPR.2017.33. Full Text
Tepper, M., and G. Sapiro. “Nonnegative matrix underapproximation for robust multiple model fitting.” Proceedings, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, 2017, pp. 655–63. Scopus, doi:10.1109/CVPR.2017.77. Full Text
Pisharady, Pramod Kumar, et al. “A Sparse Bayesian Learning Algorithm for White Matter Parameter Estimation from Compressed Multi-shell Diffusion MRI.” Medical Image Computing and Computer Assisted Intervention: MICCAI ... International Conference on Medical Image Computing and Computer Assisted Intervention, vol. 10433, 2017, pp. 602–10. EPMC, doi:10.1007/978-3-319-66182-7_69. Full Text
Sokolić, J., et al. “Generalization error of deep neural networks: Role of classification margin and data structure.” 2017 12th International Conference on Sampling Theory and Applications, SampTA 2017, 2017, pp. 147–51. Scopus, doi:10.1109/SAMPTA.2017.8024476. Full Text
Qiu, Q., et al. “Intelligent synthesis driven model calibration: framework and face recognition application.” Proceedings, 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017, vol. 2018-January, 2017, pp. 2564–72. Scopus, doi:10.1109/ICCVW.2017.301. Full Text
The Lives of Things. Consultant. (2015)
The Nasher Museum holds one of the most important collections of medieval art at an American university. These objects are mounted against the white walls of the Nasher Museum with short identifying labels. Yet how many of the museum's visitors understand that these objects were once brightly painted, and once part of full-length figures that enriched the doorways and facades of medieval churches, integrated into much larger decorative programs? The Lives of Things is a collaboration between Engineering and Art, Art History and Visual Studies. Computer scientists and engineers work with artists and art historians, using programming and graphical user interface design to provide artistic and historical contextualization with augmented reality and interactive capabilities. This eclectic blend of expertise and capabilities opens new possibilities for interdisciplinary teamwork with broad impact and for horizontal knowledge transmission. Our goal is to use emerging technologies to develop a new model of the engaged museum, one that invites the public of all ages to reconnect works of art to their original context (e.g., chapels, church portals, or facades) through interactive and gaming displays. Our first installation is now part of the Nasher permanent collection. This is a collaboration with Dr. Tepper (engineering lead), Prof. Olson (art history lead), and Prof. Bruzelius.