He has formal PhD training in mathematics, artificial intelligence, and computational science, and has experience with both supervised and unsupervised machine learning methods. For supervised learning, he proposed a learning scheme based on the Extreme Learning Machine (ELM) and L1/2 regularization for a double parallel feedforward neural network (DPFNN). Although ELM is widely used as a fast learning method for feedforward networks with a single hidden layer, a key problem is the choice of the (minimum) number of hidden nodes. To resolve this problem, he proposed combining ELM with the L1/2 regularization method, which has recently gained popularity in informatics. His experiments showed that incorporating the L1/2 regularizer into a DPFNN trained with ELM yields fewer hidden nodes with equally good performance.
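The idea can be illustrated with a minimal sketch: an ELM with random hidden weights whose output weights are fitted by iterative half thresholding, the standard solver for L1/2-regularized least squares. The half-thresholding operator drives many output weights exactly to zero, effectively pruning hidden nodes. All function names, the sample data, and the hyperparameters below are illustrative assumptions, not the author's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def half_threshold(z, lam, mu):
    """Componentwise L1/2 half-thresholding operator (sketch of the
    operator used in iterative half-thresholding algorithms)."""
    t = (54 ** (1 / 3) / 4) * (lam * mu) ** (2 / 3)   # thresholding level
    out = np.zeros_like(z)
    big = np.abs(z) > t                                # entries kept active
    phi = np.arccos((lam * mu / 8) * (np.abs(z[big]) / 3) ** (-1.5))
    out[big] = (2 / 3) * z[big] * (1 + np.cos(2 * np.pi / 3 - (2 / 3) * phi))
    return out

def elm_l12(X, y, n_hidden=50, lam=1e-3, n_iter=500):
    """ELM with an L1/2-regularized output layer: random hidden weights,
    output weights fitted by iterative half thresholding (a sketch)."""
    W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights
    b = rng.normal(size=n_hidden)                      # random biases
    H = np.tanh(X @ W + b)                             # hidden-layer outputs
    mu = 0.99 / np.linalg.norm(H, 2) ** 2              # step size < 1/||H||^2
    beta = np.zeros(n_hidden)
    for _ in range(n_iter):
        # gradient step on the least-squares loss, then half thresholding
        beta = half_threshold(beta + mu * H.T @ (y - H @ beta), lam, mu)
    return W, b, beta

# toy regression: the sparse penalty deactivates unneeded hidden nodes
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
W, b, beta = elm_l12(X, y)
print("active hidden nodes:", np.count_nonzero(beta), "of", len(beta))
```

Because the output layer is linear in `beta`, the only training cost beyond the random projection is this thresholded least-squares iteration, which preserves ELM's speed advantage.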
For unsupervised learning, he studied the Self-Organizing Map-based Optimization (SOMO) algorithm with multiple neurons and its convergence. Because formal convergence questions remained unanswered, his work addressed this issue: specifically, he proposed that convergence proofs for the SOMO algorithm could be developed using a specific distance measure. Numerical simulations on two benchmark test functions supported the theoretical findings, illustrating that the distance between neurons decreases at each iteration and finally converges to zero, and that the function value of the "winner" in the network decreases after each iteration. SOMO was then benchmarked against the conventional particle swarm optimization algorithm, with preliminary results showing that SOMO can provide a more accurate solution for large population sizes.
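The mechanism described above can be sketched as follows: a population of neurons serves as candidate solutions, the neuron with the lowest objective value is the "winner," and all neurons contract toward it with a decaying step size, so the inter-neuron distance shrinks toward zero. The specific update rule, step schedule, noise term, and benchmark function here are illustrative assumptions, not the author's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Benchmark test function: the sphere function, minimum 0 at the origin."""
    return np.sum(x ** 2)

def somo(f, dim=2, n_neurons=30, n_iter=200, lr=0.5):
    """A minimal sketch of SOM-based optimization (SOMO): each neuron is a
    candidate solution; at every iteration all neurons move toward the
    'winner' (the neuron with the lowest objective value)."""
    pop = rng.uniform(-5, 5, size=(n_neurons, dim))
    for t in range(n_iter):
        vals = np.array([f(x) for x in pop])
        winner = pop[np.argmin(vals)].copy()          # best neuron this round
        decay = 1.0 - t / n_iter                      # shrinking step size
        noise = 0.1 * decay * rng.normal(size=pop.shape)
        pop += lr * decay * (winner - pop) + noise    # contract toward winner
    vals = np.array([f(x) for x in pop])
    return pop[np.argmin(vals)], vals.min()

best, best_val = somo(sphere)
print("best value found:", best_val)
```

As the step size and noise decay, the population collapses onto the winner, mirroring the convergence behavior observed in the simulations: inter-neuron distances shrink toward zero while the winner's function value improves.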
He also has significant experience applying machine learning to genetic and genomic techniques to elucidate mechanisms underlying neurological disorders and cancer. He is strongly committed to translational genomics research focusing on human genetics, machine learning, and electronic-based analysis of Big Data in healthcare and medical research.