Deep Neural Network for Sea Surface Temperature Prediction: References

29 May 2024

Authors:

(1) Yuxin Meng;

(2) Feng Gao;

(3) Eric Rigall;

(4) Ran Dong;

(5) Junyu Dong;

(6) Qian Du.

REFERENCES

[1] A. F. Shchepetkin and J. C. McWilliams, “The regional oceanic modeling system (ROMS): A split-explicit, free-surface, topography-following-coordinate oceanic model,” Ocean Modelling, vol. 9, no. 4, pp. 347–404, 2005.

[2] R. Jacob, C. Schafer, I. Foster, et al. “Computational design and performance of the Fast Ocean Atmosphere Model,” Proceedings of International Conference on Computational Science. 2001, pp. 175–184.

[3] C. Chen, R. C. Beardsley, G. Cowles, et al. “An unstructured grid, finite-volume coastal ocean model: FVCOM system,” Oceanography, vol. 19, no. 1, pp. 78–89, 2006.

[4] E. P. Chassignet, H. E. Hurlburt, O. M. Smedstad, et al. “The HYCOM (hybrid coordinate ocean model) data assimilative system,” Journal of Marine Systems, vol. 65, no. 1, pp. 60–83, 2007.

[5] Y. LeCun, Y. Bengio, G. Hinton. “Deep learning,” Nature, vol. 521, pp. 436–444, 2015.

[6] P. C. Bermant, M. M. Bronstein, R. J. Wood, et al. “Deep machine learning techniques for the detection and classification of sperm whale bioacoustics,” Scientific Reports, vol. 9, no. 1, pp. 1–10, 2019.

[7] V. Allken, N. O. Handegard, S. Rosen, et al. “Fish species identification using a convolutional neural network trained on synthetic data,” ICES Journal of Marine Science, vol. 76, no. 1, pp. 342–349, 2019.

[8] E. Lima, X. Sun, J. Dong, et al. “Learning and transferring convolutional neural network knowledge to ocean front recognition,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 3, pp. 354–358, 2017.

[9] L. Xu, X. Wang, X. Wang, “Shipwrecks detection based on deep generation network and transfer learning with small amount of sonar images,” IEEE Data Driven Control and Learning Systems Conference (DDCLS), 2019, pp. 638–643.

[10] Y. Ren, X. Li, W. Zhang, “A data-driven deep learning model for weekly sea ice concentration prediction of the Pan-Arctic during the melting season,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–19, 2022.

[11] M. Reichstein, G. Camps-Valls, B. Stevens, et al. “Deep learning and process understanding for data-driven Earth system science,” Nature, vol. 566, no. 7743, pp. 195–204, 2019.

[12] N. D. Brenowitz, C. S. Bretherton. “Prognostic validation of a neural network unified physics parameterization,” Geophysical Research Letters, vol. 45, no. 12, pp. 6289–6298, 2018.

[13] O. Pannekoucke and R. Fablet. “PDE-NetGen 1.0: from symbolic partial differential equation (PDE) representations of physical processes to trainable neural network representations,” Geoscientific Model Development, vol. 13, no. 7, pp. 3373–3382, 2020.

[14] K. Patil, M. C. Deo, M. Ravichandran. “Prediction of sea surface temperature by combining numerical and neural techniques,” Journal of Atmospheric and Oceanic Technology, vol. 33, no. 8, pp. 1715–1726, 2016.

[15] Y. G. Ham, J. H. Kim, J. J. Luo. “Deep learning for multi-year ENSO forecasts,” Nature, vol. 573, no. 7775, pp. 568–572, 2019.

[16] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al. “Generative adversarial nets,” Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2014.

[17] L. Yang, D. Zhang, G. E. Karniadakis. “Physics-informed generative adversarial networks for stochastic differential equations,” SIAM Journal on Scientific Computing, vol. 42, no. 1, pp. A292–A317, 2020.

[18] B. Lütjens, B. Leshchinskiy, C. Requena-Mesa, et al. “Physics-informed GANs for coastal flood visualization,” arXiv preprint arXiv:2010.08103, 2020.

[19] Q. Zheng, L. Zeng, G. E. Karniadakis, “Physics-informed semantic inpainting: Application to geostatistical modeling,” Journal of Computational Physics, vol. 419, pp. 1–10, 2020.

[20] X. Shi, Z. Chen, H. Wang, et al. “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2015.

[21] J. Gu, Z. Wang, J. Kuen, et al. “Recent advances in convolutional neural networks,” Pattern Recognition, vol. 77, pp. 354–377, 2018.

[22] H. Ge, Z. Yan, W. Yu, et al. “An attention mechanism based convolutional LSTM network for video action recognition,” Multimedia Tools and Applications, vol. 78, no. 14, pp. 20533–20556, 2019.

[23] W. Che and S. Peng, “Convolutional LSTM networks and RGB-D video for human motion recognition,” Proceedings of IEEE Information Technology and Mechatronics Engineering Conference (ITOEC), 2018, pp. 951–955.

[24] I. D. Lins, M. Araujo, et al. “Prediction of sea surface temperature in the tropical Atlantic by support vector machines,” Computational Statistics and Data Analysis, vol. 61, pp. 187–198, 2013.

[25] K. Patil, M. C. Deo, “Basin-scale prediction of sea surface temperature with artificial neural networks,” Journal of Atmospheric and Oceanic Technology, vol. 35, no. 7, pp. 1441–1455, 2018.

[26] Q. Zhang, H. Wang, J. Dong, et al. “Prediction of sea surface temperature using long short-term memory,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 10, pp. 1745–1749, 2017.

[27] Y. Yang, J. Dong, X. Sun, et al. “A CFCC-LSTM model for sea surface temperature prediction,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 2, pp. 207–211, 2018.

[28] K. Patil, M. C. Deo, “Prediction of daily sea surface temperature using efficient neural networks,” Ocean Dynamics, vol. 67, no. 3, pp. 357–368, 2017.

[29] S. Ouala, C. Herzet, R. Fablet, “Sea surface temperature prediction and reconstruction using patch-level neural network representations,” Proceedings of IEEE International Geoscience and Remote Sensing Symposium, 2018, pp. 5628–5631.

[30] C. Shorten, T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of Big Data, vol. 6, no. 1, pp. 1–48, 2019.

[31] H. Bagherinezhad, M. Horton, M. Rastegari, et al. “Label refinery: Improving ImageNet classification through label progression,” arXiv preprint arXiv:1805.02641, 2018.

[32] K. Chatfield, K. Simonyan, A. Vedaldi, et al. “Return of the devil in the details: Delving deep into convolutional nets,” Proceedings of the British Machine Vision Conference (BMVC), 2014.

[33] A. Jurio, M. Pagola, M. Galar, et al. “A comparison study of different color spaces in clustering based image segmentation,” Proceedings of International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2010, pp. 532–541.

[34] Q. You, J. Luo, H. Jin, et al. “Robust image sentiment analysis using progressively trained and domain transferred deep networks,” Proceedings of the AAAI Conference on Artificial Intelligence, 2015, pp. 381–388.

[35] Z. Zhong, L. Zheng, G. Kang, et al. “Random erasing data augmentation,” Proceedings of the AAAI Conference on Artificial Intelligence, 2020, pp. 13001–13008.

[36] T. DeVries, G. W. Taylor, “Improved regularization of convolutional neural networks with Cutout,” arXiv preprint arXiv:1708.04552, 2017.

[37] A. Mikołajczyk, M. Grochowski, “Data augmentation for improving deep learning in image classification problem,” Proceedings of International Interdisciplinary PhD workshop (IIPhDW), 2018, pp. 117–122.

[38] S. M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582.

[39] J. Su, D. V. Vargas, K. Sakurai, “One pixel attack for fooling deep neural networks,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828–841, 2019.

[40] M. Zajac, K. Zołna, N. Rostamzadeh, et al. “Adversarial framing for image and video classification,” Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 10077–10078.

[41] S. Li, Y. Chen, Y. Peng, et al. “Learning more robust features with adversarial training,” arXiv preprint arXiv:1804.07757, 2018.

[42] L. A. Gatys, A. S. Ecker, M. Bethge, “A neural algorithm of artistic style,” Journal of Vision, vol. 16, no. 12, 2016.

[43] D. Ulyanov, A. Vedaldi, V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016.

[44] P. Jackson, A. Abarghouei, S. Bonner, et al. “Style augmentation: Data augmentation via style randomization,” Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, 2019, pp. 83–92.

[45] J. Tobin, R. Fong, A. Ray, et al. “Domain randomization for transferring deep neural networks from simulation to the real world,” Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 23–30.

[46] C. Summers and M. Dinneen, “Improved mixed-example data augmentation,” Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, pp. 1262–1270.

[47] D. Liang, F. Yang, T. Zhang, et al. “Understanding Mixup training methods,” IEEE Access, vol. 6, pp. 58774–58783, 2018.

[48] R. Takahashi, T. Matsubara, K. Uehara, “Augmentation using random image cropping and patching for deep CNNs,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 9, pp. 2917–2931, 2019.

[49] T. Konno and M. Iwazume, “Icing on the cake: An easy and quick post-learning method you can try after deep learning,” arXiv preprint arXiv:1807.06540, 2018.

[50] T. DeVries and G. Taylor, “Dataset augmentation in feature space,” arXiv preprint arXiv:1702.05538, 2017.

[51] F. Moreno-Barea, F. Strazzera, J. Jerez, et al. “Forward noise adjustment scheme for data augmentation,” Proceedings of IEEE Symposium Series on Computational Intelligence (SSCI), 2018, pp. 728–734.

[52] M. Frid-Adar, I. Diamant, E. Klang, et al. “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” Neurocomputing, vol. 321, pp. 321–331, 2018.

[53] J. Zhu, Y. Shen, D. Zhao, et al. “In-domain GAN inversion for real image editing,” Proceedings of European Conference on Computer Vision (ECCV), 2020, pp. 592–608.

[54] K. Simonyan, A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” Proceedings of International Conference on Learning Representations (ICLR), 2015, pp. 1–14.

[55] GHRSST data, https://www.ghrsst.org (accessed: July 3, 2022).

[56] HYCOM data, https://www.hycom.org (accessed: July 3, 2022).

[57] J. Y. Zhu, P. Krähenbühl, E. Shechtman, et al. “Generative visual manipulation on the natural image manifold,” Proceedings of European Conference on Computer Vision (ECCV), 2016, pp. 597–613.

[58] A. Larsen, S. Sønderby, H. Larochelle, et al. “Autoencoding beyond pixels using a learned similarity metric,” Proceedings of International Conference on Machine Learning (ICML), 2016, pp. 1558–1566.

Yuxin Meng received the B.Eng. degree in computer science and technology from the Anhui University of Science and Technology, Huainan, China, in 2010. She is currently pursuing the Ph.D. degree with the Vision Lab, Ocean University of China, Qingdao, China, supervised by Prof. Junyu Dong. Her research interests include image processing and computer vision.

Feng Gao (Member, IEEE) received the B.Sc. degree in software engineering from Chongqing University, Chongqing, China, in 2008, and the Ph.D. degree in computer science and technology from Beihang University, Beijing, China, in 2015. He is currently an Associate Professor with the School of Information Science and Engineering, Ocean University of China. His research interests include remote sensing image analysis, pattern recognition, and machine learning.

Eric Rigall received the Engineering degree from the Graduate School of Engineering, University of Nantes, Nantes, France, in 2018. He is currently pursuing the Ph.D. degree with the Vision Laboratory, Ocean University of China, Qingdao, China, supervised by Prof. Junyu Dong. His research interests include radio-frequency identification (RFID)-based positioning, signal and image processing, machine learning, and computer vision.

Ran Dong received the B.Sc. degree in mathematics and statistics from Donghua University, Shanghai, China, in 2014, and the Ph.D. degree in mathematics and statistics from the University of Strathclyde, Glasgow, United Kingdom, in 2020. She is currently a Lecturer with the School of Mathematical Science, Ocean University of China. Her research interests include artificial intelligence, mathematics, and statistics.

Junyu Dong (Member, IEEE) received the B.Sc. and M.Sc. degrees from the Department of Applied Mathematics, Ocean University of China, Qingdao, China, in 1993 and 1999, respectively, and the Ph.D. degree in image processing from the Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom, in 2003. He is currently a Professor and Dean with the School of Computer Science and Technology, Ocean University of China. His research interests include visual information analysis and understanding, machine learning and underwater image processing.

Qian Du (Fellow, IEEE) received the Ph.D. degree in electrical engineering from the University of Maryland, Baltimore County, Baltimore, MD, USA, in 2000. She is currently the Bobby Shackouls Professor with the Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS, USA. Her research interests include hyperspectral remote sensing image analysis and applications, and machine learning. Dr. Du was the recipient of the 2010 Best Reviewer Award from the IEEE Geoscience and Remote Sensing Society (GRSS). She was a Co-Chair of the Data Fusion Technical Committee of the IEEE GRSS from 2009 to 2013, the Chair of the Remote Sensing and Mapping Technical Committee of the International Association for Pattern Recognition from 2010 to 2014, and the General Chair of the Fourth IEEE GRSS Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, held in Shanghai, China, in 2012. She was an Associate Editor of PATTERN RECOGNITION and the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. From 2016 to 2020, she was the Editor-in-Chief of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING. She is currently a member of the IEEE Periodicals Review and Advisory Committee and the SPIE Publications Committee. She is a Fellow of SPIE, the International Society for Optics and Photonics.

This paper is available on arXiv under a CC 4.0 license.